I am trying to implement a simulation of a Spark Streaming application. Initially I read all the batches and then add them to the InputDStream as separate RDDs. After 90 iterations I have read more than 5 GB of data, but only 14.4 MB shows up in storage memory (https://ibb.co/GHGsTLW). What may be causing this?
The size of the data is only 75 MB.
Is this the reason why I get overhead from the garbage collector?
val inputData: mutable.Queue[RDD[DBSCANPoint]] = mutable.Queue()
val inputStream: InputDStream[DBSCANPoint] = ssc.queueStream(inputData)

for (i <- 0 to total_batches + 1) {
  val inputRDD = ssc.sparkContext.textFile(inputPath + "/batch" + i + ".txt", this.cores)
    .map(line => line.split(' '))
    .map(line => DBSCANPoint(Vectors.dense(GlobalVariables.newPointId(), line(0).toDouble, line(1).toDouble)))
    .cache()
  inputData.enqueue(inputRDD)
}

inputStream.foreachRDD(rdd => {
  // rdd computation
})
The intention of the program is to run through a directory; if a file is an Excel spreadsheet, it should open it, extract and manipulate some data, then move on to the next file. As this is a laborious process, I have tried to split the task across multiple threads. Even after this, it only uses about 20% of the total CPU capacity, and it didn't particularly speed up.
# Imports inferred from the code below
import time
import numpy as np
import pandas as pd
from multiprocessing.pool import ThreadPool
from openpyxl import load_workbook

def extract_data(unique_file_names):
    global rootdir
    global newarray
    global counter
    global t0
    # Read the first workbook to initialise the shared array
    string = rootdir + "\\" + str(unique_file_names[0])
    wb = load_workbook(string, read_only=True, data_only=True)
    ws = wb["Sheet1"]
    df = pd.DataFrame(ws.values)
    newarray = df.loc[4:43, :13].values
    counter = 0
    print("Starting pool")
    pool = ThreadPool(processes=20)
    pool.map(process, unique_file_names)
    pool.close()

def process(filename):
    global newarray
    global unique_file_names
    global counter
    global t0
    counter += 1
    try:
        # Read one workbook and append its block of values to the shared array
        file_name = rootdir + "/" + str(filename)
        wb = load_workbook(file_name, read_only=True, data_only=True)
        ws = wb["Sheet1"]
        df = pd.DataFrame(ws.values)
        newarray = np.hstack((newarray, df.loc[4:43, 4:13].values))
    except:
        print("Failure")
        pass
    print("Time %.2f, Completed %.2f %% " % ((time.clock() - t0), counter * 100 / len(unique_file_names)))
So it takes roughly a second and a half to process one spreadsheet, but as I said, pool.map() made next to no difference. Any suggestions?
Thanks in advance
The code below runs a comparison of users and writes the results to a file. I've removed some code to keep it as concise as possible, but speed is an issue in this code as well:
import scala.collection.JavaConversions._

object writedata {

  def getDistance(str1: String, str2: String) = {
    val zipped = str1.zip(str2)
    val numberOfEqualSequences = zipped.count(_ == ('1', '1')) * 2
    val p = zipped.count(_ == ('1', '1')).toFloat * 2
    val q = zipped.count(_ == ('1', '0')).toFloat * 2
    val r = zipped.count(_ == ('0', '1')).toFloat * 2
    val s = zipped.count(_ == ('0', '0')).toFloat * 2
    (q + r) / (p + q + r)
  } //> getDistance: (str1: String, str2: String)Float

  case class UserObj(id: String, nCoordinate: String)

  val userList = new java.util.ArrayList[UserObj] //> userList : java.util.ArrayList[writedata.UserObj] = []

  for (a <- 1 to 100) {
    userList.add(new UserObj("2", "101010"))
  }

  def using[A <: { def close(): Unit }, B](param: A)(f: A => B): B =
    try { f(param) } finally { param.close() } //> using: [A <: AnyRef{def close(): Unit}, B](param: A)(f: A => B)B

  def appendToFile(fileName: String, textData: String) =
    using(new java.io.FileWriter(fileName, true)) { fileWriter =>
      using(new java.io.PrintWriter(fileWriter)) { printWriter =>
        printWriter.println(textData)
      }
    } //> appendToFile: (fileName: String, textData: String)Unit

  var counter = 0 //> counter : Int = 0

  for (xUser <- userList.par) {
    userList.par.map(yUser => {
      if (!xUser.id.isEmpty && !yUser.id.isEmpty)
        synchronized {
          appendToFile("c:\\data-files\\test.txt", getDistance(xUser.nCoordinate, yUser.nCoordinate).toString)
        }
    })
  }
}
The above code was previously an imperative solution, so the .par functionality was within an inner and outer loop. I'm attempting to convert it to a more functional implementation while also taking advantage of Scala's parallel collections framework.
In this example the data set size is 100, but in the code I'm working on the size is 8,000, which translates to 64,000,000 comparisons. I'm using a synchronized block so that multiple threads are not writing to the same file at the same time. A performance improvement I'm considering is populating a separate collection within the inner loop (userList.par.map(yUser => {) and then writing that collection out to a separate file; a rough sketch of that idea follows below. Are there other methods I can use to improve performance, so that I can handle a List that contains 8,000 items instead of the 100 in the example above?
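Roughly, what I have in mind is something like this (an untested sketch, reusing the names from the code above):
// Untested sketch of the idea: compute all distances into an in-memory
// collection in parallel first, then write the whole result out once,
// so no synchronization is needed during the computation.
// Note: for 8,000 users this holds all 64,000,000 result strings in memory.
val results = userList.par.flatMap { xUser =>
  userList
    .filter(yUser => !xUser.id.isEmpty && !yUser.id.isEmpty)
    .map(yUser => getDistance(xUser.nCoordinate, yUser.nCoordinate).toString)
}.seq

appendToFile("c:\\data-files\\test.txt", results.mkString("\n"))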
I'm not sure if you removed too much code for clarity, but from what I can see, there is absolutely nothing that can run in parallel since the only thing you are doing is writing to a file.
EDIT:
One thing that you should do is to move the getDistance(...) computation before the synchronized call to appendToFile, otherwise your parallelized code ends up being sequential.
Instead of calling a synchronized appendToFile, I would call appendToFile in a non-synchronized way, but have each call to that method add the new line to some synchronized queue. Then I would have another thread that flushes that queue to disk periodically. But then you would also need to add something to make sure that the queue is also flushed when all computations are done. So that could get complicated...
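For illustration, a minimal sketch of that queue-plus-writer-thread idea could look like this (untested; the names are mine, not from your code):
import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.atomic.AtomicBoolean

// Untested sketch: workers enqueue result lines without locking,
// while a single writer thread drains the queue to disk.
val queue = new ConcurrentLinkedQueue[String]()
val done = new AtomicBoolean(false)

val writer = new Thread(new Runnable {
  def run(): Unit = {
    val out = new java.io.PrintWriter(new java.io.FileWriter("c:\\data-files\\test.txt", true))
    try {
      while (!done.get() || !queue.isEmpty) {
        val line = queue.poll()
        if (line != null) out.println(line) else Thread.sleep(10)
      }
    } finally out.close()
  }
})
writer.start()

for (xUser <- userList.par; yUser <- userList) {
  if (!xUser.id.isEmpty && !yUser.id.isEmpty)
    queue.add(getDistance(xUser.nCoordinate, yUser.nCoordinate).toString)
}

done.set(true) // signal that no more results are coming
writer.join()  // wait for the remaining lines to be flushed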
Alternatively, you could also keep your code and simply drop the synchronization around the call to appendToFile. It seems that println itself is synchronized. However, that would be risky since println is not officially synchronized and it could change in future versions.
I have a list of documents to process, and for each record I want to attach some metadata to the document "member" inside the "corpus" data structure that tm, the R package, generates (from reading in text files).
This for-loop works, but it is very slow; performance seems to degrade roughly as f ~ 1/n_docs.
for (i in seq(from = 1, to = length(corpus), by = 1)) {
    if (opts$options$verbose == TRUE || i %% 50 == 0) {
        print(paste(i, " ", substr(corpus[[i]], 1, 140), sep = " "))
    }
    DublinCore(corpus[[i]], "title") = csv[[i, 10]]
    DublinCore(corpus[[i]], "Publisher") = csv[[i, 16]]  # institutions
}
This may do something to the corpus variable but I don't know what.
But when I put it inside tm_map() (similar to the lapply() function), it runs much faster, although the changes are not made persistent:
i = 0
corpus = tm_map(corpus, function(x) {
    i <<- i + 1
    if (opts$options$verbose == TRUE) {
        print(paste(i, " ", substr(x, 1, 140), sep = " "))
    }
    meta(x, tag = "Heading") = csv[[i, 10]]
    meta(x, tag = "publisher") = csv[[i, 16]]
})
The corpus variable has empty metadata fields after exiting the tm_map function, when it should be filled. I have a few other things to do with the collection afterwards.
The R documentation for the meta() function says this:
Examples:
data("crude")
meta(crude[[1]])
DublinCore(crude[[1]])
meta(crude[[1]], tag = "Topics")
meta(crude[[1]], tag = "Comment") <- "A short comment."
meta(crude[[1]], tag = "Topics") <- NULL
DublinCore(crude[[1]], tag = "creator") <- "Ano Nymous"
DublinCore(crude[[1]], tag = "Format") <- "XML"
DublinCore(crude[[1]])
meta(crude[[1]])
meta(crude)
meta(crude, type = "corpus")
meta(crude, "labels") <- 21:40
meta(crude)
I tried many of these calls (with the variable corpus instead of crude), but they do not seem to work.
Someone else seems to have had the same problem with a similar data set (a forum post from 2009, no response).
Here's a bit of benchmarking...
With the for loop :
expr.for <- function() {
    for (i in seq(from = 1, to = length(corpus), by = 1)) {
        DublinCore(corpus[[i]], "title") = LETTERS[round(runif(26))]
        DublinCore(corpus[[i]], "Publisher") = LETTERS[round(runif(26))]
    }
}
microbenchmark(expr.for())
# Unit: milliseconds
# expr min lq median uq max
# 1 expr.for() 21.50504 22.40111 23.56246 23.90446 70.12398
With tm_map :
corpus <- crude
expr.map <- function() {
    tm_map(corpus, function(x) {
        meta(x, "title") = LETTERS[round(runif(26))]
        meta(x, "Publisher") = LETTERS[round(runif(26))]
        x
    })
}
microbenchmark(expr.map())
# Unit: milliseconds
# expr min lq median uq max
# 1 expr.map() 5.575842 5.700616 5.796284 5.886589 8.753482
So the tm_map version, as you noticed, seems to be about 4 times faster.
In your question you say that the changes in the tm_map version are not persistent; that is because you don't return x at the end of your anonymous function. The end of the function should be:
meta(x, tag = "Heading") = csv[[i,10]]
meta(x, tag = "publisher" ) = csv[[i,16]]
x
I wrote a new combinator for my parser in Scala.
It's a variation of the ^^ combinator that also passes position information on.
But accessing the position information of the input element really costs performance.
In my case, parsing a big example needs around 3 seconds without position information, and over 30 seconds with it.
I wrote a runnable example where the runtime is about 50% higher when accessing the position.
Why is that? How can I get a better runtime?
Example:
import scala.util.parsing.combinator.RegexParsers
import scala.util.parsing.combinator.Parsers
import scala.util.matching.Regex
import scala.language.implicitConversions

object FooParser extends RegexParsers with Parsers {
  var withPosInfo = false

  def b: Parser[String] = regexB("""[a-z]+""".r) ^^# { case (b, x) => b + " ::" + x.toString }

  def regexB(p: Regex): BParser[String] = new BParser(regex(p))

  class BParser[T](p: Parser[T]) {
    def ^^#[U](f: ((Int, Int), T) => U): Parser[U] = Parser { in =>
      val source = in.source
      val offset = in.offset
      val start = handleWhiteSpace(source, offset)
      val inwo = in.drop(start - offset)
      p(inwo) match {
        case Success(t, in1) => {
          var a = 3
          var b = 4
          if (withPosInfo) { // takes a lot of time
            a = inwo.pos.line
            b = inwo.pos.column
          }
          Success(f((a, b), t), in1)
        }
        case ns: NoSuccess => ns
      }
    }
  }

  def main(args: Array[String]) = {
    val r = "foo" * 50000000
    var now = System.nanoTime
    parseAll(b, r)
    var us = (System.nanoTime - now) / 1000
    println("without: %d us".format(us))

    withPosInfo = true
    now = System.nanoTime
    parseAll(b, r)
    us = (System.nanoTime - now) / 1000
    println("with : %d us".format(us))
  }
}
Output:
without: 2952496 us
with : 4591070 us
Unfortunately, I don't think you can use the same approach. The problem is that line numbers end up being implemented by scala.util.parsing.input.OffsetPosition, which builds a list of every line break every time it is created. So if it ends up with string input, it will scan the entire input on every call to pos (twice in your example). See the code for CharSequenceReader and OffsetPosition for more details.
There is one quick thing you can do to speed this up:
val ip = inwo.pos
a = ip.line
b = ip.column
to at least avoid creating the position twice. But that still leaves you with a lot of redundant work. I'm afraid that to really solve the problem you'll have to build the line index yourself, as OffsetPosition does, just once, and then keep referring to it.
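For illustration, a self-built index could look something like this (a sketch, not part of the parser library; the names are made up):
// Sketch: scan the input once to record where each line starts,
// then answer (line, column) queries with a binary search on the offset.
class LineIndex(source: CharSequence) {
  private val lineStarts: Array[Int] = {
    val starts = scala.collection.mutable.ArrayBuffer(0)
    var i = 0
    while (i < source.length) {
      if (source.charAt(i) == '\n') starts += i + 1
      i += 1
    }
    starts.toArray
  }

  // Both results are 1-based, like scala.util.parsing.input.Position.
  def lineAndColumn(offset: Int): (Int, Int) = {
    var lo = 0
    var hi = lineStarts.length - 1
    while (lo < hi) { // find the last line start <= offset
      val mid = (lo + hi + 1) / 2
      if (lineStarts(mid) <= offset) lo = mid else hi = mid - 1
    }
    (lo + 1, offset - lineStarts(lo) + 1)
  }
}
You would build one such index per input up front and, inside ^^#, call something like index.lineAndColumn(start) instead of touching inwo.pos at all.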
You could also file a bug report / make an enhancement request. This is not a very good way to implement the feature.
I've got some big (let's say 200 MiB - 2 GiB) textual files filled with tons of duplicated records. Each line can have about 100 or even more exact duplicates spread over the file. The task is to remove all the repetitions, leaving one unique instance of every record.
I've implemented it as follows:
object CleanFile {
  def apply(s: String, t: String) {
    import java.io.{PrintWriter, FileWriter, BufferedReader, FileReader}

    println("Reading " + s + "...")

    var linesRead = 0
    val lines = new scala.collection.mutable.ArrayBuffer[String]()

    val fr = new FileReader(s)
    val br = new BufferedReader(fr)

    var rl = ""
    while (rl != null) {
      rl = br.readLine()
      if (!lines.contains(rl))
        lines += rl
      linesRead += 1
      if (linesRead > 0 && linesRead % 100000 == 0)
        println(linesRead + " lines read, " + lines.length + " unique found.")
    }

    br.close()
    fr.close()

    println(linesRead + " lines read, " + lines.length + " unique found.")
    println("Writing " + t + "...")

    val fw = new FileWriter(t)
    val pw = new PrintWriter(fw)

    lines.foreach(line => pw.println(line))

    pw.close()
    fw.close()
  }
}
And it takes ~15 minutes (on my Core 2 Duo with 4 GB RAM) to process a 92 MiB file, while the following command:
awk '!seen[$0]++' filename
takes about a minute to process a 1.1 GiB file (which would take many hours with my code above).
What's wrong with my code?
What is wrong is that you're storing the lines in an ArrayBuffer. Lookup (lines.contains) is O(n) for an ArrayBuffer, so the whole thing runs in O(n²) time. By contrast, the awk solution uses a hash table, meaning O(1) lookup and a total running time of O(n).
Try using a mutable.HashSet instead.
You could also just read all lines and call .distinct on them. I don't know how distinct is implemented, but I'm betting it uses a HashSet to do it.
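For example, a minimal sketch of the HashSet approach could look like this (untested; a LinkedHashSet keeps the first-seen order, in case the output order matters):
import java.io.PrintWriter
import scala.collection.mutable
import scala.io.Source

// Sketch: O(1) membership checks instead of the O(n) ArrayBuffer scan.
def dedup(in: String, out: String): Unit = {
  val seen = mutable.LinkedHashSet.empty[String]
  val source = Source.fromFile(in)
  try source.getLines().foreach(seen += _)
  finally source.close()

  val pw = new PrintWriter(out)
  try seen.foreach(line => pw.println(line))
  finally pw.close()
}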