Here are two solutions to exercise 4.9 in Cay Horstmann's Scala for the Impatient: "Write a function lteqgt(values: Array[Int], v: Int) that returns a triple containing the counts of values less than v, equal to v, and greater than v." One uses tail recursion, the other uses a while loop. I thought that both would compile to similar bytecode but the while loop is slower than the tail recursion by a factor of almost 2. This suggests to me that my while method is badly written.
import scala.annotation.tailrec
import scala.util.Random

object PerformanceTest {

  def main(args: Array[String]): Unit = {
    val bigArray: Array[Int] = fillArray(new Array[Int](100000000))
    println(time(lteqgt(bigArray, 25)))
    println(time(lteqgt2(bigArray, 25)))
  }

  def time[T](block: => T): T = {
    val start = System.nanoTime: Double
    val result = block
    val end = System.nanoTime: Double
    println("Time = " + (end - start) / 1000000.0 + " millis")
    result
  }

  @tailrec def fillArray(a: Array[Int], pos: Int = 0): Array[Int] = {
    if (pos == a.length)
      a
    else {
      a(pos) = Random.nextInt(50)
      fillArray(a, pos + 1)
    }
  }

  @tailrec def lteqgt(values: Array[Int], v: Int, lt: Int = 0, eq: Int = 0, gt: Int = 0, pos: Int = 0): (Int, Int, Int) = {
    if (pos == values.length)
      (lt, eq, gt)
    else
      lteqgt(values, v,
        lt + (if (values(pos) < v) 1 else 0),
        eq + (if (values(pos) == v) 1 else 0),
        gt + (if (values(pos) > v) 1 else 0),
        pos + 1)
  }

  def lteqgt2(values: Array[Int], v: Int): (Int, Int, Int) = {
    var lt = 0
    var eq = 0
    var gt = 0
    var pos = 0
    val limit = values.length
    while (pos < limit) {
      if (values(pos) > v)
        gt += 1
      else if (values(pos) < v)
        lt += 1
      else
        eq += 1
      pos += 1
    }
    (lt, eq, gt)
  }
}
Adjust the size of bigArray according to your heap size. Here is some sample output:
Time = 245.110899 millis
(50004367,2003090,47992543)
Time = 465.836894 millis
(50004367,2003090,47992543)
Why is the while method so much slower than the tailrec? Naively, the tailrec version looks to be at a slight disadvantage, as it must always perform 3 "if" checks for every iteration, whereas the while version will often perform only 1 or 2 tests thanks to the else construct. (NB: reversing the order in which I run the two methods does not affect the outcome.)
Test results (after reducing the array size to 20000000)
Under Java 1.6.22 I get 151 and 122 ms for tail-recursion and while-loop respectively.
Under Java 1.7.0 I get 55 and 101 ms
So under Java 6 your while-loop is actually faster; both have improved in performance under Java 7, but the tail-recursive version has overtaken the loop.
Explanation
The performance difference is due to the fact that in your loop you conditionally add 1 to the totals, while in the recursion you always add either 1 or 0. So the two are not equivalent. The while-loop equivalent of your recursive method is:
def lteqgt2(values: Array[Int], v: Int): (Int, Int, Int) = {
  var lt = 0
  var eq = 0
  var gt = 0
  var pos = 0
  val limit = values.length
  while (pos < limit) {
    gt += (if (values(pos) > v) 1 else 0)
    lt += (if (values(pos) < v) 1 else 0)
    eq += (if (values(pos) == v) 1 else 0)
    pos += 1
  }
  (lt, eq, gt)
}
and this gives exactly the same execution time as the recursive method (regardless of Java version).
Discussion
I'm not an expert on why the Java 7 VM (HotSpot) can optimize this better than your first version, but I'd guess it's because it's taking the same path through the code each time (rather than branching along the if / else if paths), so the bytecode can be inlined more efficiently.
But remember that this is not the case in Java 6. Why one while-loop outperforms the other is a question of JVM internals. Happily for the Scala programmer, the version produced from idiomatic tail-recursion is the faster one in the latest version of the JVM.
The difference could also be occurring at the processor level. See this question, which explains how code slows down if it contains unpredictable branching.
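To see the branch-prediction effect in isolation, here is a rough sketch, not a rigorous benchmark (no JIT warmup, and results vary by JVM and hardware); countGt and the array size are made up for illustration. The branchy loop does identical work on both inputs, but typically runs noticeably faster on the sorted one, because the branch outcome becomes predictable:

import scala.util.Random

// Rough sketch of the branch-prediction effect: same work on both
// inputs, different branch predictability.
object BranchDemo extends App {
  def countGt(values: Array[Int], v: Int): Int = {
    var gt = 0
    var pos = 0
    while (pos < values.length) {
      if (values(pos) > v) gt += 1 // taken ~50% of the time on random data
      pos += 1
    }
    gt
  }

  def time[T](block: => T): T = {
    val start = System.nanoTime
    val result = block
    println("Time = " + (System.nanoTime - start) / 1e6 + " millis")
    result
  }

  val data = Array.fill(10000000)(Random.nextInt(50))
  time(countGt(data, 25))        // unpredictable branch
  time(countGt(data.sorted, 25)) // predictable branch: all false, then all true
}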
The two constructs are not identical. In particular, the unconditional-add form needs no jumps: on x86, you can use cmp and setle and add, instead of having to use cmp and jb followed (if you don't jump) by add. Not jumping is faster than jumping on pretty much every modern architecture.
So, if you have code that looks like
if (a < b) x += 1
where you either add or jump past the add, vs.
x += (a < b)
(which only makes sense in C/C++, where true converts to 1 and false to 0), the latter tends to be faster as it can be turned into more compact, jump-free assembly code. In Scala/Java you can't write that, but you can write
x += (if (a < b) 1 else 0)
which a smart JVM should recognize as equivalent to x += (a < b), which has a jump-free machine-code translation that is usually faster than jumping. An even smarter JVM would recognize that
if (a < b) x += 1
is the same yet again (because adding zero doesn't do anything).
C/C++ compilers routinely perform optimizations like this. That the JIT compiler apparently couldn't apply any of them was not a mark in its favor; as of 1.7 it apparently can, but only partially (i.e. it doesn't recognize that conditionally adding one is the same as always adding one or zero, but it does at least convert x += (if (a < b) 1 else 0) into fast machine code).
Now, none of this has anything to do with tail recursion or while loops per se. With tail recursion it's more natural to write the if (a < b) 1 else 0 form, but you can use either form with either construct. It just so happened that you picked one form for the tail recursion and the other for the while loop, making it look like recursion vs. looping was the relevant change, rather than the two different ways of writing the conditional.
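To make that concrete, here is a sketch (hypothetical, untested) of the counting function written tail-recursively but with the branchy if / else if conditional; by the argument above, it should perform like the original while loop rather than like the original tailrec version:

import scala.annotation.tailrec

// Sketch: tail recursion with the *branchy* conditional. Recursion vs.
// looping is unchanged; only the shape of the conditional differs.
@tailrec
def lteqgtBranchy(values: Array[Int], v: Int,
                  lt: Int = 0, eq: Int = 0, gt: Int = 0,
                  pos: Int = 0): (Int, Int, Int) =
  if (pos == values.length) (lt, eq, gt)
  else if (values(pos) > v) lteqgtBranchy(values, v, lt, eq, gt + 1, pos + 1)
  else if (values(pos) < v) lteqgtBranchy(values, v, lt + 1, eq, gt, pos + 1)
  else lteqgtBranchy(values, v, lt, eq + 1, gt, pos + 1)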
Related
I am trying to learn Scala and functional programming ideology by rewriting basic exercises. Currently I'm having trouble with the naive approach to generating primes, "trial division".
The trouble, described below, is that I could not rewrite this well-known algorithm in a functional style while preserving efficiency, because I had no suitable immutable data structure: something like a List, but with fast operations not only on the head but also at the very end.
I started by writing Java code which, for every odd number, tests its divisibility by the already-found primes (bounded by the square root of the value being tested) and adds it to the end of the list if no divisor was found.
http://ideone.com/QE8U0I
List<Integer> primes = new ArrayList<>();
primes.add(2);
int cur = 3;
while (primes.size() < 100000) {
    for (Integer x : primes) {
        if (x * x > cur) {
            primes.add(cur);
            break;
        }
        if (cur % x == 0) {
            break;
        }
    }
    cur += 2;
}
Now I tried to rewrite it in a "functional way" - there was no problem with using recursion instead of loops, but I got stuck on immutable collections. The core idea is as follows:
http://ideone.com/4DQ6mi
import scala.annotation.tailrec

def primes(n: Int) = {
  @tailrec
  def divisibleByAny(x: Int, list: List[Int]): Boolean = {
    if (list.isEmpty) false else {
      val h = list.head
      h * h <= x && (x % h == 0 || divisibleByAny(x, list.tail))
    }
  }
  @tailrec
  def morePrimes(from: Int, prev: List[Int]): List[Int] = {
    if (prev.size == n) prev else
      morePrimes(from + 2, if (divisibleByAny(from, prev)) prev else prev :+ from)
  }
  morePrimes(3, List(2))
}
But it is slow - if I understand correctly, this is because appending to an immutable list requires creating a new copy of the whole list.
I searched the documentation for a more suitable data structure and tried substituting the list with an immutable Queue, for it is said:
Adding items to the queue always has cost O(1) ... Removing an item is on average O(1).
But it turned out to be even slower:
http://ideone.com/v8BsuQ
import scala.annotation.tailrec
import scala.collection.immutable.Queue

def primes(n: Int) = {
  @tailrec
  def divisibleByAny(x: Int, list: Queue[Int]): Boolean = {
    if (list.isEmpty) false else {
      val (h, t) = list.dequeue
      h * h <= x && (x % h == 0 || divisibleByAny(x, t))
    }
  }
  @tailrec
  def morePrimes(from: Int, prev: Queue[Int]): Queue[Int] = {
    if (prev.size == n) prev else
      morePrimes(from + 2, if (divisibleByAny(from, prev)) prev else prev.enqueue(from))
  }
  morePrimes(3, Queue(2))
}
What is going wrong or am I missing something?
P.S. I believe there are other algorithms for generating primes that are more suitable for a functional style. I think I've seen a paper about it. But for now I'm interested in this one, or more precisely in the existence of a suitable data structure.
According to http://docs.scala-lang.org/overviews/collections/performance-characteristics.html, Vectors have an amortised constant cost for appending, prepending and seeking. Indeed, using vectors instead of lists in your solution is much faster:
import scala.annotation.tailrec

def primes(n: Int) = {
  @tailrec
  def divisibleByAny(x: Int, list: Vector[Int]): Boolean = {
    if (list.isEmpty) false else {
      val (h +: t) = list
      h * h <= x && (x % h == 0 || divisibleByAny(x, t))
    }
  }
  @tailrec
  def morePrimes(from: Int, prev: Vector[Int]): Vector[Int] = {
    if (prev.length == n) prev else
      morePrimes(from + 2, if (divisibleByAny(from, prev)) prev else prev :+ from)
  }
  morePrimes(3, Vector(2))
}
http://ideone.com/x3k4A3
I think you have 2 main options:
Use a Vector, which is better than a List for appending. It is a bitmapped trie data structure (http://en.wikipedia.org/wiki/Trie). It's "effectively" O(1) for appending (i.e. O(1) on average).
Or... possibly the answer you're not looking for:
Use a mutable data structure like ListBuffer. Immutability is great to aim for, and immutable collections should be your go-to choice, but sometimes, for efficiency reasons, you may use mutable structures. What is key is to make sure the mutability does not "leak out" of your classes. If you look at the List.scala implementation, you'll see ListBuffer used a lot internally; however, it's converted back to a List just before it leaves the class. If it's good enough for the core Scala libraries, it's probably OK for you to use in the exceptional cases that warrant it.
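For that second option, a minimal sketch of the trial-division loop with the mutation kept local (primesBuffered and divisible are made-up names; the buffer never escapes, callers only ever see an immutable List):

import scala.collection.mutable.ListBuffer

def primesBuffered(n: Int): List[Int] = {
  def divisible(x: Int, ps: ListBuffer[Int]): Boolean = {
    val it = ps.iterator
    while (it.hasNext) {
      val p = it.next()
      if (p * p > x) return false // no prime <= sqrt(x) divides x
      if (x % p == 0) return true
    }
    false
  }
  val primes = ListBuffer(2)
  var cur = 3
  while (primes.size < n) {
    if (!divisible(cur, primes)) primes += cur
    cur += 2
  }
  primes.toList // convert back to an immutable List before it leaves
}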
Besides using a Vector, also consider using higher-order functions instead of recursion. That's also a completely valid functional style. On my machine the following implementation of divisibleByAny is about 8x faster than @Pyetras' tailrec implementation when running primes(1000000):
def divisibleByAny(x: Int, list: Vector[Int]): Boolean =
list.view.takeWhile(el => el * el <= x).exists(x % _ == 0)
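A quick sanity check of that test in the REPL (hypothetical values, assuming the definition above):

val ps = Vector(2, 3, 5, 7, 11)
divisibleByAny(9, ps)  // true: takeWhile keeps 2 and 3, and 3 divides 9
divisibleByAny(11, ps) // false: only 2 and 3 are tried, since 5 * 5 > 11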
I just encountered some strange behavior in the Scala Set API. Here is my function, stripped of what's related to the rest of the project:
def grade(...): Double = {
  val setA: HashSet = // get from somewhere else
  val setB: HashSet = // get from somewhere else
  if ((setA size) == 0 || (setB size) == 0) return 0
  else return (setA & setB size) / (setA | setB size)
}
This function is called a lot of times inside a loop, and the whole loop executes in around 4.5 sec. But when I replace the size of the union with the sum of the sizes (a gross approximation), in order to test the influence of the union operation, the execution time drops to around 0.35 sec...
def grade(...): Double = {
  val setA: HashSet = // get from somewhere else
  val setB: HashSet = // get from somewhere else
  if ((setA size) == 0 || (setB size) == 0) return 0
  else return (setA & setB size) / (setA size + setB size)
}
Well, you can't compare a simple operation like the sum of 2 Ints with the union of 2 Sets. I would expect the performance of these operations to be very different, especially if your Sets contain a lot of elements.
You don't need the union, because you already compute the intersection: by inclusion-exclusion, |A ∪ B| = |A| + |B| - |A ∩ B|. Try the following:
def grade: Double = {
  val setA: HashSet = // get from somewhere else
  val setB: HashSet = // get from somewhere else
  if ((setA size) == 0 || (setB size) == 0) return 0
  else {
    val inter = setA & setB size

    return inter / ((setA size) + (setB size) - inter)
  }
}
However, I find your measurement a little odd, because I expected both operations (union and intersect) to take around the same amount of time, O(n). Removing the union should only cut the time roughly in half (to about 2 s)...
Are you using parallel collections, by any chance? Union is performed in a sequential manner, so any parallel collection is first converted into a sequential collection. That might account for the performance.
Other than that, union is about O(n), so you are going from O(n) to O(1), which makes a lot of difference.
The problem involves the performance of Scala's PriorityQueue[Array[Int]] on a large data set. The following operations are needed: enqueue, dequeue, and filter. Currently, my implementation is as follows:
For every element of type Array[Int], there is a complex evaluation function. (I'm not sure how to write it in a more efficient way, because it excludes position 0.)
def eval_fun(a: Array[Int]) =
  if (a.size < 2) 3
  else {
    var ret = 0
    var i = 1
    while (i < a.size) {
      if ((a(i) & 0x3) == 1) ret += 1
      else if ((a(i) & 0x3) == 3) ret += 3
      i += 1
    }
    ret / a.size
  }
The ordering is given by a comparison function based on the evaluation function (reversed, for descending order):
val arr_ord = new Ordering[Array[Int]] {
  def compare(a: Array[Int], b: Array[Int]) = eval_fun(b) compare eval_fun(a)
}
The PriorityQueue is defined as:
val pq = new scala.collection.mutable.PriorityQueue[Array[Int]]()(arr_ord)
Question:
Is there a more elegant and efficient way to write such an evaluation function? I'm thinking of using fold, but fold cannot exclude position 0.
Is there a data structure that maintains a priority queue with unique elements? Applying a filter operation after each enqueue operation is not efficient.
Is there a caching method to reduce the evaluation computation? Since when adding a new element to the queue, every element may need to be evaluated by eval_fun again, which is not necessary if the evaluated value of every element can be cached. Also, I should mention that two distinct elements may have the same evaluated value.
Is there a more efficient data structure that avoids the generic type? Because if the number of elements reaches 10,000 and the size of each array reaches 1,000, the performance is terribly slow.
Thank you.
(1) If you want maximum performance here, I would stick with the while loop, even if it is not terribly elegant. Otherwise, if you use a view of the Array, you can easily drop the first element before folding:
a.view.drop(1).foldLeft(0)((sum, x) => sum + ((x & 0x03) match {
  case 0x01 => 1
  case 0x03 => 3
  case _    => 0
})) / a.size
(2) You can maintain two structures: the priority queue and a set. Both combined give you a sorted set... You could use collection.immutable.SortedSet, but there is no mutable variant in the standard library. Do you want equality based on the priority function, or on the actual array contents? In the latter case, you won't get around comparing the arrays element by element for each insertion, undoing the effect of caching the priority function value.
(3) Just put the calculated priority along with the array in the queue. I.e.
implicit val ord = Ordering.by[(Int, Array[Int]), Int](_._1)
val pq = new collection.mutable.PriorityQueue[(Int, Array[Int])]
pq += eval_fun(a) -> a
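A quick usage sketch (a is a hypothetical element; with Ordering.by(_._1), the tuple with the largest cached priority comes out first):

val a = Array(0, 1, 3, 5)          // hypothetical element
pq += eval_fun(a) -> a             // evaluate once, at insertion time
val (priority, arr) = pq.dequeue() // the cached priority travels with the array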
Well, you can use a tail-recursive loop (generally these are more "idiomatic"):
def eval(a: Array[Int]): Int =
  if (a.size < 2) 3
  else {
    @annotation.tailrec
    def loop(ret: Int = 0, i: Int = 1): Int =
      if (i >= a.size) ret / a.size
      else {
        val mod3 = a(i) & 0x3
        if (mod3 == 1) loop(ret + 1, i + 1)
        else if (mod3 == 3) loop(ret + 3, i + 1)
        else loop(ret, i + 1)
      }
    loop()
  }
Then you can use that to initialise a cached priority value:
case class PriorityArray(a: Array[Int]) {
  lazy val priority = if (a.size < 2) 3 else {
    @annotation.tailrec
    def loop(ret: Int = 0, i: Int = 1): Int =
      if (i >= a.size) ret / a.size
      else {
        val mod3 = a(i) & 0x3
        if (mod3 == 2) loop(ret, i + 1)
        else loop(ret + mod3, i + 1)
      }
    loop()
  }
}
You may also note that I removed a redundant & operation and use only a single conditional (testing for 2, rather than two separate checks for 1 and 3) – these should have some minimal effect.
There is not a huge difference from 0__'s proposal that just came through.
My answers:
If evaluation is critical, keep it as it is. You might get better performance with recursion (not sure why, but it happens), but you'll certainly get worse performance with pretty much any other approach.
No, there isn't, but you can come pretty close to it just by modifying the dequeue operation:
import scala.collection.mutable.PriorityQueue

def distinctDequeue[T](q: PriorityQueue[T]): T = {
  val result = q.dequeue
  while (q.nonEmpty && q.head == result) q.dequeue
  result
}
Otherwise, you'd have to keep a second data structure just to keep track of whether an element has been added or not. Either way, that equals comparison is pretty heavy, but I have a suggestion to make it faster in the next item.
Note, however, that this requires that ties on the cost function be resolved in some other way.
Like 0__ suggested, put the cost on the priority queue. But you can also keep a cache for the function, if that would be helpful. I'd try something like this:
import scala.collection.mutable.WrappedArray

val evalMap = scala.collection.mutable.HashMap[WrappedArray[Int], Int]()

def eval_fun(a: Array[Int]) =
  if (a.size < 2) 3
  else evalMap.getOrElseUpdate(a, {
    var ret = 0
    var i = 1
    while (i < a.size) {
      if ((a(i) & 0x3) == 1) ret += 1
      else if ((a(i) & 0x3) == 3) ret += 3
      i += 1
    }
    ret / a.size
  })

import scala.math.Ordering.Implicits._

val pq = new collection.mutable.PriorityQueue[(Int, WrappedArray[Int])]
pq += eval_fun(a) -> (a: WrappedArray[Int])
Note that I did not create a special Ordering -- I'm using the standard Ordering so that the WrappedArray will break the ties. There's little cost to wrap the Array, and you get it back with .array, but, on the other hand, you'll get the following:
Ties will be broken by comparing the arrays themselves. If there aren't many ties in the cost, this should be good enough. If there are, add something else to the tuple to help break ties without comparing the arrays.
That means all equal elements will be kept together, which will enable you to dequeue all of them at the same time, giving the impression of having kept only one.
And that equals will actually work, because WrappedArrays compare the way Scala sequences do.
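For instance, a REPL sketch of that last point:

import scala.collection.mutable.WrappedArray

Array(1, 2, 3) == Array(1, 2, 3)        // false: arrays use reference equality
(Array(1, 2, 3): WrappedArray[Int]) ==
  (Array(1, 2, 3): WrappedArray[Int])   // true: compared elementwise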
I don't understand what you mean by that fourth point.
I wrote an answer to the first Project Euler question:
Add all the natural numbers below one thousand that are multiples of 3 or 5.
The first thing that came to me was:
(1 until 1000).filter(i => (i % 3 == 0 || i % 5 == 0)).foldLeft(0)(_ + _)
but it's slow (it takes 125 ms), so I rewrote it, simply thinking of "another way" rather than "the faster way":
(1 until 1000).foldLeft(0) {
  (total, x) =>
    x match {
      case i if i % 3 == 0 || i % 5 == 0 => i + total // add
      case _ => total // skip
    }
}
This is much faster (only 2 ms). Why? My guess is that the second version uses only the Range generator and doesn't manifest a fully realized collection in any way, doing it all in one pass, both faster and with less memory. Am I right?
Here's the code on IdeOne: http://ideone.com/GbKlP
The problem, as others have said, is that filter creates a new collection. The alternative withFilter doesn't, but that doesn't have a foldLeft. Also, using .view, .iterator or .toStream would all avoid creating the new collection in various ways, but they are all slower here than the first method you used, which I thought somewhat strange at first.
But, then... See, 1 until 1000 is a Range, whose size is actually very small, because it doesn't store each element. Also, Range's foreach is extremely optimized, and is even specialized, which is not the case of any of the other collections. Since foldLeft is implemented as a foreach, as long as you stay with a Range you get to enjoy its optimized methods.
(_: Range).foreach:
@inline final override def foreach[@specialized(Unit) U](f: Int => U) {
  if (length > 0) {
    val last = this.last
    var i = start
    while (i != last) {
      f(i)
      i += step
    }
    f(i)
  }
}
(_: Range).view.foreach
def foreach[U](f: A => U): Unit =
  iterator.foreach(f)
(_: Range).view.iterator
override def iterator: Iterator[A] = new Elements(0, length)

protected class Elements(start: Int, end: Int) extends BufferedIterator[A] with Serializable {
  private var i = start

  def hasNext: Boolean = i < end

  def next: A =
    if (i < end) {
      val x = self(i)
      i += 1
      x
    } else Iterator.empty.next

  def head =
    if (i < end) self(i) else Iterator.empty.next

  /** $super
   *  '''Note:''' `drop` is overridden to enable fast searching in the middle of indexed sequences.
   */
  override def drop(n: Int): Iterator[A] =
    if (n > 0) new Elements(i + n, end) else this

  /** $super
   *  '''Note:''' `take` is overridden to be symmetric to `drop`.
   */
  override def take(n: Int): Iterator[A] =
    if (n <= 0) Iterator.empty.buffered
    else if (i + n < end) new Elements(i, i + n)
    else this
}
(_: Range).view.iterator.foreach
def foreach[U](f: A => U) { while (hasNext) f(next()) }
And that, of course, doesn't even count the filter between view and foldLeft:
override def filter(p: A => Boolean): This = newFiltered(p).asInstanceOf[This]

protected def newFiltered(p: A => Boolean): Transformed[A] = new Filtered { val pred = p }

trait Filtered extends Transformed[A] {
  protected[this] val pred: A => Boolean
  override def foreach[U](f: A => U) {
    for (x <- self)
      if (pred(x)) f(x)
  }
  override def stringPrefix = self.stringPrefix + "F"
}
Try making the collection lazy first, so
(1 until 1000).view.filter...
instead of
(1 until 1000).filter...
That should avoid the cost of building an intermediate collection. You might also get better performance from using sum instead of foldLeft(0)(_ + _); it's always possible that some collection type has a more efficient way to sum numbers. If not, it's still cleaner and more declarative...
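Putting both suggestions together, the whole thing becomes a one-liner:

(1 until 1000).view.filter(i => i % 3 == 0 || i % 5 == 0).sum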
Looking through the code, it looks like filter does build a new Seq on which the foldLeft is then called. The second version skips that step. It's not so much the memory, although that surely helps, but that the filtered collection is never built at all. All that work is simply never done.
Range uses TraversableLike.filter, which looks like this:
def filter(p: A => Boolean): Repr = {
  val b = newBuilder
  for (x <- this)
    if (p(x)) b += x
  b.result
}
I think it's the += on line 4 that's the difference. Filtering in foldLeft eliminates it.
filter creates a whole new sequence, on which foldLeft is then called. Try:
(1 until 1000).view.filter(i => (i % 3 == 0 || i % 5 == 0)).reduceLeft(_+_)
This will prevent said effect, merely wrapping the original collection. Exchanging foldLeft for reduceLeft is only cosmetic (in this case).
Now the challenge is, can you think of a yet more efficient way? Not that your solution is too slow in this case, but how well does it scale? What if instead of 1000, it was 1000000000? There is a solution that could compute the latter case just as quickly as the former.
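(For the curious, here is a sketch of one such constant-time approach; sumMultiples and euler1 are made-up names. It uses inclusion-exclusion plus the closed form for the sum of the multiples of k below n, namely k * m * (m + 1) / 2 with m = (n - 1) / k.)

// Sum of the multiples of k below n: k + 2k + ... + mk = k * m * (m + 1) / 2
def sumMultiples(k: Long, n: Long): Long = {
  val m = (n - 1) / k
  k * m * (m + 1) / 2
}

// Multiples of 15 are counted by both terms, so subtract them once.
def euler1(n: Long): Long =
  sumMultiples(3, n) + sumMultiples(5, n) - sumMultiples(15, n)

euler1(1000)        // 233168
euler1(1000000000L) // computed just as quickly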
This is an interview question: "How to build a distributed algorithm to compute the balance of the parentheses ?"
Usually the balance algorithm scans a string from left to right and uses a stack to make sure that the number of open parentheses always >= the number of close parentheses, and that finally the number of open parentheses == the number of close parentheses.
How would you make it distributed ?
You can break the string into chunks and process each separately, assuming you can read and send to the other machines in parallel. You need two numbers for each string.
The minimum nesting depth achieved relative to the start of the string.
The total gain or loss in nesting depth across the whole string.
With these values, you can compute the values for the concatenation of many chunks as follows:
minNest = 0
totGain = 0
for p in chunkResults
    minNest = min(minNest, totGain + p.minNest)
    totGain += p.totGain
return new ChunkResult(minNest, totGain)
The parentheses are matched if the final values of totGain and minNest are zero.
I would apply the map-reduce pattern, in which the map function computes over a part of the string and returns either an empty string, if its parentheses are balanced, or a string with the remaining unmatched parentheses.
Then the reduce function would concatenate the results of two map calls and compute the concatenation again, returning the same kind of result as map. At the end of all computations, you'd either obtain an empty string or a string containing the unbalanced parentheses.
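A minimal sketch of that idea (using (closers, openers) counts instead of literal residue strings; residue, combine, balanced, and the chunkSize/.par split are assumptions for illustration):

// Each chunk, after cancelling matched pairs, reduces to the shape
// ")" * closers + "(" * openers — represented here as a pair of counts.
def residue(chunk: String): (Int, Int) = {
  var closers = 0
  var openers = 0
  for (c <- chunk) c match {
    case '(' => openers += 1
    case ')' => if (openers > 0) openers -= 1 else closers += 1
    case _   =>
  }
  (closers, openers)
}

// Concatenation: the left chunk's '(' cancel against the right chunk's ')'.
def combine(l: (Int, Int), r: (Int, Int)): (Int, Int) = {
  val cancelled = math.min(l._2, r._1)
  (l._1 + r._1 - cancelled, l._2 + r._2 - cancelled)
}

def balanced(s: String, chunkSize: Int): Boolean =
  s.isEmpty || s.grouped(chunkSize).toSeq.par.map(residue).reduce(combine) == (0, 0)

Since combine is associative, the parallel reduce is legal; this is the same minNest/totGain bookkeeping as above, just phrased in terms of residues.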
I'll try to give a more detailed explanation of @jonderry's answer. Code first, in Scala:
def parBalance(chars: Array[Char], chunkSize: Int): Boolean = {
  require(chunkSize > 0, "chunkSize must be greater than 0")

  def traverse(from: Int, until: Int): (Int, Int) = {
    var count = 0
    var stack = 0
    var nest = 0
    for (n <- from until until) {
      val cur = chars(n)
      if (cur == '(') {
        count += 1
        stack += 1
      } else if (cur == ')') {
        count -= 1
        if (stack > 0) stack -= 1
        else nest -= 1
      }
    }
    (nest, count)
  }

  def reduce(from: Int, until: Int): (Int, Int) = {
    val m = (until + from) / 2
    if (until - from <= chunkSize) {
      traverse(from, until)
    } else {
      parallel(reduce(from, m), reduce(m, until)) match {
        case ((minNestL, totGainL), (minNestR, totGainR)) =>
          (minNestL min (minNestR + totGainL), totGainL + totGainR)
      }
    }
  }

  reduce(0, chars.length) == (0, 0)
}
Given a string, if we remove the balanced parentheses, what's left will be of the form )))(((. Write n for the (negated) count of ) and m for the count of (; then m >= 0 and n <= 0 (the sign convention makes the calculation easier). Here n is minNest and m + n is totGain. For a truly balanced string we need m + n == 0 && n == 0.
In a parallel operation, how do we derive those values for a node from its left and right children? For totGain we just add the two up. When calculating n for the node, it is either n(left), if n(right) doesn't contribute, or n(right) + left.totGain, whichever is smaller.