From what I have read, one should prefer to use generic Seq when defining sequences instead of specific implementations such as List or Vector.
However, I have parts of my code where a sequence will be used mostly for full traversal (mapping, filtering, etc.) and other parts where the same sequence will be used for indexing operations (indexOf, lastIndexWhere).
In the first case, I think it is better to use LinearSeq (implemented by List), whereas in the second case it is better to use IndexedSeq (implemented by Vector).
My question is: do I need to explicitly call the conversion methods toList and toIndexedSeq in my code, or is the conversion done under the hood in an intelligent manner? If I use these conversions, is there a performance penalty when going back and forth between IndexedSeq and LinearSeq?
Thanks in advance
Vector will almost always outperform List. Unless your algorithm uses only ::, head and tail, Vector will be faster than List.
Using List is more of a conceptual question about your algorithm: the data is stack-structured, you only access head/tail, you only add elements by prepending, and you use pattern matching (which also works with Vector, it just feels more natural to me with List).
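For example, a List is a natural fit when you only consume it head-first and deconstruct it with pattern matching (a minimal sketch; sum is a made-up example function):
// Recursively sum a list by matching on head :: tail --
// exactly the access pattern where List is at its best.
def sum(xs: List[Int]): Int = xs match {
  case Nil          => 0
  case head :: tail => head + sum(tail)
}

sum(List(1, 2, 3)) // 6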
You might want to look at Why should I use vector in scala
Now for some nice numbers to compare (obviously not a 'real' benchmark, but eh):
val l = List.range(1,1000000)
val a = Vector.range(1,1000000)
import System.{currentTimeMillis => milli}
val startList = milli
l.map(_*2).map(_+2).filter(_%2 == 0)
println(s"time for list map/filter operations : ${milli - startList}")
val startVector = milli
a.map(_*2).map(_+2).filter(_%2 == 0)
println(s"time for vector map/filter operations : ${milli - startVector}")
Output:
time for list map/filter operations : 1214
time for vector map/filter operations : 364
Edit:
Just realized this doesn't actually answer your question. As far as I know, you will have to call toList/toVector yourself. As for performance, it depends on your sequence, but unless you're going back and forth all the time, it shouldn't be a problem.
Once again, not a serious benchmark, but:
val startConvToList = milli
a.toList
println(s"time for conversion to List: ${milli - startConvToList}")
val startConvToVector = milli
l.toVector
println(s"time for conversion to Vector: ${milli - startConvToVector}")
Output:
time for conversion to List: 48
time for conversion to Vector: 18
I have done the same for indexOf, and Vector is also more performant:
val l = List.range(1,1000000)
val a = Vector.range(1,1000000)
import System.{currentTimeMillis => milli}
val startList = milli
l.indexOf(500000)
println("time for list index operation : " + (milli - startList))
val startVector = milli
a.indexOf(500000)
println("time for vector index operation : " + (milli - startVector))
Output:
time for list index operation : 36
time for vector index operation : 33
So I guess I should use Vector all the time in internal implementations, but I should use Seq when I build an interface, as specified here:
Difference between a Seq and a List in Scala
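In other words, something like this (a hypothetical sketch; normalize and values are made-up names):
// The public signature only promises a Seq...
def normalize(values: Seq[Double]): Seq[Double] = {
  // ...while the implementation converts to a Vector internally
  // for fast traversal and indexed access.
  val v = values.toVector
  val max = v.max
  v.map(_ / max)
}
Callers only depend on Seq, while the method body gets Vector's performance characteristics.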
Related
I am trying to improve the performance of my code by removing any sources of type instability.
For example, I have several Array{Any} declarations, which I know generally destroy performance. Here is a minimal example (greatly simplified compared to my code) of a 2D Array of LinearInterpolation objects, i.e.:
n,m=5,5
abstract_arr=Array{Any}(undef,n+1,m+1)
arr_x=LinRange(1,10,100)
for l in 1:n
    for alpha in 1:m
        abstract_arr[l,alpha] = LinearInterpolation(arr_x, alpha .* arr_x.^n)
    end
end
so that typeof(abstract_arr) gives Array{Any,2}.
How can I initialize abstract_arr to avoid using Array{Any} here?
And how can I do this in general for Arrays whose entries are structures like Dicts() where the Dicts() are dictionaries of 2-tuples of Float64?
If you make a comprehension, the type will be figured out for you:
arr = [LinearInterpolation(arr_x, alpha .* arr_x.^n) for l in 1:n, alpha in 1:m]
isconcretetype(eltype(arr)) # true
When it can predict the type & length, it will make the right array the first time. When it cannot, it will widen or extend it as necessary. So probably some of these will be Vector{Int}, and some Vector{Union{Nothing, Int}}:
[rand()>0.8 ? nothing : 0 for i in 1:3]
[rand()>0.8 ? nothing : 0 for i in 1:3]
[rand()>0.8 ? nothing : 0 for i in 1:10]
The main trick is that you just need to know the type of the object that is returned by LinearInterpolation, and then you can specify that instead of Any when constructing the array. To determine that, let's look at the typeof of one of these objects:
julia> typeof(LinearInterpolation(arr_x,arr_x.^2))
Interpolations.Extrapolation{Float64, 1, ScaledInterpolation{Float64, 1, Interpolations.BSplineInterpolation{Float64, 1, Vector{Float64}, BSpline{Linear{Throw{OnGrid}}}, Tuple{Base.OneTo{Int64}}}, BSpline{Linear{Throw{OnGrid}}}, Tuple{LinRange{Float64}}}, BSpline{Linear{Throw{OnGrid}}}, Throw{Nothing}}
This gives a fairly complicated type, but we don't necessarily need to use the whole thing (though in some cases it might be more efficient to). So for instance, we can say
using Interpolations
n,m=5,5
abstract_arr=Array{Interpolations.Extrapolation}(undef,n+1,m+1)
arr_x=LinRange(1,10,100)
for l in 1:n
    for alpha in 1:m
        abstract_arr[l,alpha] = LinearInterpolation(arr_x, alpha .* arr_x.^n)
    end
end
which gives us a result of type
julia> typeof(abstract_arr)
Matrix{Interpolations.Extrapolation} (alias for Array{Interpolations.Extrapolation, 2})
Since the return type of this LinearInterpolation does not seem to be of known size, and
julia> isbitstype(typeof(LinearInterpolation(arr_x,arr_x.^2)))
false
each assignment to this array will still trigger allocations, and consequently there actually may not be much or any performance gain from the added type stability when it comes to filling the array. Nonetheless, there may still be performance gains down the line when it comes to using values stored in this array (depending on what is subsequently done with them).
Maybe it is a stupid question, but I have this doubt and I cannot find an answer...
If I have a map operation on a list of complex objects and, to make the code more readable, I use intermediate variables inside the map, can the performance change?
For example, these are two versions of the same code:
profilesGroupedWithIds map {
  c =>
    val blockId = c._2
    val entityIds = c._1._2
    val entropy = c._1._1
    if (separatorID < 0) BlockDirty(blockId, entityIds.swap, entropy)
    else BlockClean(blockId, entityIds, entropy)
}
..
profilesGroupedWithIds map {
  c =>
    if (separatorID < 0) BlockDirty(c._2, c._1._2.swap, c._1._1)
    else BlockClean(c._2, c._1._2, c._1._1)
}
As you can see, the first version is more readable than the second one.
But is the efficiency the same? Or do the three variables that I create inside the map have to be collected by the garbage collector, and will this reduce the performance (suppose that 'profilesGroupedWithIds' is a very big list)?
Thanks
Regards
Luca
The vals are just aliases for the tuple elements. So the generated Java bytecode will be identical in both cases, and so will be the performance.
More importantly, the first variant is much better code since it clearly conveys the intent.
Here is a third variant that avoids accessing the tuple elements _1 and _2 entirely:
profilesGroupedWithIds map {
  case ((entropy, entityIds), blockId) =>
    if (separatorID < 0) BlockDirty(blockId, entityIds.swap, entropy)
    else BlockClean(blockId, entityIds, entropy)
}
When there is a collection and you must perform two or more operations on all of its elements, which is faster?
val f1: String => String = _.reverse
val f2: String => String = _.toUpperCase
val elements: Seq[String] = List("a", "b", "c")
Iterate multiple times and perform one operation per pass:
val result = elements.map(f1).map(f2)
This approach has the advantage that the intermediate result after applying the first function could be reused.
Iterate once and perform all operations on each element together:
val result = elements.map(element => f2(f1(element)))
or
val result = elements.map(f2.compose(f1))
Is there any difference in performance between these two approaches? And if yes, which is faster?
Here's the thing: transforming a collection is roughly O(N) times the cost of the functions applied, so I doubt the second set of choices you present above would make even the slightest difference in runtime. The first option you list is a different story: creating new intermediate collections can be avoided, because that creation is where the overhead lies. That's where "view" collections come in (see this good example I spotted):
In Scala, what does "view" do?
If you had to apply several mapping operations you might do this:
val result = elements.view.map(f1).map(f2).force
(force at the end causes all the functions to evaluate)
The second set of examples above might be a tiny bit faster, but the "view" option could make your code more readable if you had a lot of these or complex anonymous functions used in the mapping.
Composing functions to produce a single-pass transformation will probably gain you some performance, but it will quickly become unreadable. Consider using views as an alternative. While this will create intermediate collections:
val result = elements.map(f1).map(f2)
This will perform lazy evaluation and will perform functional composition the same way you do:
val result = elements.view.map(f1).map(f2)
Notice that the result type will be SeqView, so you might want to convert it to a list later with toList.
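For instance (a minimal sketch):
val result = elements.view.map(f1).map(f2).toList // the view is lazy; toList forces evaluation and materializes a List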
I have a class with a few Int and Double fields. What is the fastest way to copy all the data from one object to another?
class IntFields {
  private val data: Array[Int] = Array(0, 0)

  def first: Int = data(0)
  def first_=(value: Int) = data(0) = value

  def second: Int = data(1)
  def second_=(value: Int) = data(1) = value

  def copyFrom(another: IntFields) =
    Array.copy(another.data, 0, data, 0, 2)
}
This is the way I would suggest, but I doubt it is really effective, since I have no clear understanding of Scala's internals.
update1:
In fact I'm searching for Scala's equivalent of C++'s memcpy. I just need to take one simple object and copy its contents byte by byte.
Array copying is just a hack; I've googled for a normal Scala-supported method and found none.
update2:
I've tried to microbenchmark two holders: a simple case class with 12 variables and one backed by an array. In all benchmarks (simple copying and complex calculations over a collection) the array-based solution is about 7% slower.
So I need other means of simulating memcpy.
Since both arrays used for Array.copy are arrays of primitive integers (i.e. it is not the case that one of them holds boxed integers, in which case a while loop with boxing/unboxing would have been used to copy the elements), it is as efficient as Java's System.arraycopy. Which is to say: if this were a huge array, you would probably see a performance difference compared to a while loop that copies the elements. Since the array only has 2 elements, it is probably more efficient to just do:
def copyFrom(another: IntFields) {
  data(0) = another.data(0)
  data(1) = another.data(1)
}
EDIT:
I'd say that the fastest thing is to just copy the fields one-by-one. If performance is really important, you should consider using Unsafe.getInt - some report it should be faster than using System.arraycopy for small blocks: https://stackoverflow.com/questions/5574241/interesting-uses-of-sun-misc-unsafe
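For a holder with plain Int and Double fields, copying one-by-one looks like this (a hypothetical sketch, not your actual class):
// Hypothetical holder with plain fields; copying is just
// straight field assignments, no array indirection.
class Fields {
  var first: Int = 0
  var second: Int = 0
  var ratio: Double = 0.0

  def copyFrom(other: Fields): Unit = {
    first = other.first
    second = other.second
    ratio = other.ratio
  }
}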
I wanted to compare the performance characteristics of immutable.Map and mutable.Map in Scala for a similar operation (namely, merging many maps into a single one. See this question). I have what appear to be similar implementations for both mutable and immutable maps (see below).
As a test, I generated a List containing 1,000,000 single-item Map[Int, Int] and passed this list into the functions I was testing. With sufficient memory, the results were unsurprising: ~1200ms for mutable.Map, ~1800ms for immutable.Map, and ~750ms for an imperative implementation using mutable.Map -- not sure what accounts for the huge difference there, but feel free to comment on that, too.
What did surprise me a bit, perhaps because I'm being a bit thick, is that with the default run configuration in IntelliJ 8.1, both mutable implementations hit an OutOfMemoryError, but the immutable collection did not. The immutable test did run to completion, but it did so very slowly -- it took about 28 seconds. When I increased the max JVM memory (to about 200MB, not sure where the threshold is), I got the results above.
Anyway, here's what I really want to know:
Why do the mutable implementations run out of memory, but the immutable implementation does not? I suspect that the immutable version allows the garbage collector to run and free up memory before the mutable implementations do -- and all of those garbage collections explain the slowness of the immutable low-memory run -- but I'd like a more detailed explanation than that.
Implementations below. (Note: I don't claim that these are the best implementations possible. Feel free to suggest improvements.)
def mergeMaps[A,B](func: (B,B) => B)(listOfMaps: List[Map[A,B]]): Map[A,B] =
  (Map[A,B]() /: (for (m <- listOfMaps; kv <- m) yield kv)) { (acc, kv) =>
    acc + (if (acc.contains(kv._1)) kv._1 -> func(acc(kv._1), kv._2) else kv)
  }

def mergeMutableMaps[A,B](func: (B,B) => B)(listOfMaps: List[mutable.Map[A,B]]): mutable.Map[A,B] =
  (mutable.Map[A,B]() /: (for (m <- listOfMaps; kv <- m) yield kv)) { (acc, kv) =>
    acc + (if (acc.contains(kv._1)) kv._1 -> func(acc(kv._1), kv._2) else kv)
  }

def mergeMutableImperative[A,B](func: (B,B) => B)(listOfMaps: List[mutable.Map[A,B]]): mutable.Map[A,B] = {
  val toReturn = mutable.Map[A,B]()
  for (m <- listOfMaps; kv <- m) {
    if (toReturn contains kv._1) {
      toReturn(kv._1) = func(toReturn(kv._1), kv._2)
    } else {
      toReturn(kv._1) = kv._2
    }
  }
  toReturn
}
Well, it really depends on the actual type of Map you are using. Probably HashMap. Now, mutable structures like that gain performance by pre-allocating memory they expect to use. You are joining one million maps, so the final map is bound to be somewhat big. Let's see how these keys/values get added:
protected def addEntry(e: Entry) {
  val h = index(elemHashCode(e.key))
  e.next = table(h).asInstanceOf[Entry]
  table(h) = e
  tableSize = tableSize + 1
  if (tableSize > threshold)
    resize(2 * table.length)
}
See the 2 * in the resize line? The mutable HashMap grows by doubling each time it runs out of space, while the immutable one is pretty conservative in memory usage (though existing keys will usually occupy twice the space when updated).
Now, as for other performance problems, you are creating a list of keys and values in the first two versions. That means that, before you join any maps, you already have each Tuple2 (the key/value pairs) in memory twice! Plus the overhead of List, which is small, but we are talking about more than one million elements times the overhead.
You may want to use a projection, which avoids that. Unfortunately, projection is based on Stream, which isn't very reliable for our purposes on Scala 2.7.x. Still, try this instead:
for (m <- listOfMaps.projection; kv <- m) yield kv
A Stream doesn't compute a value until it is needed. The garbage collector ought to collect the unused elements as well, as long as you don't keep a reference to the Stream's head, which seems to be the case in your algorithm.
EDIT
To complement that: a for/yield comprehension takes one or more collections and returns a new collection. Whenever it makes sense, the returned collection is of the same type as the original collection. So, for example, in the following code, the for-comprehension creates a new list, which is then stored inside l2. It is not val l2 = that creates the new list, but the for-comprehension.
val l = List(1,2,3)
val l2 = for (e <- l) yield e*2
Now, let's look at the code being used in the first two algorithms (minus the mutable keyword):
(Map[A,B]() /: (for (m <- listOfMaps; kv <- m) yield kv))
The foldLeft method, here written with its /: synonym, will be invoked on the object returned by the for-comprehension. Remember that a : at the end of an operator inverts the order of the object and the parameters.
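For instance, these two expressions are equivalent (a minimal sketch):
// /: is the symbolic synonym for foldLeft; the trailing colon means the
// method is invoked on the right-hand operand (the list), with the left
// operand (the initial accumulator) as its argument.
val sum1 = (0 /: List(1, 2, 3))(_ + _)      // 6
val sum2 = List(1, 2, 3).foldLeft(0)(_ + _) // 6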
Now, let's consider what object this is, the one on which foldLeft is being called. The first generator in this for-comprehension is m <- listOfMaps. We know that listOfMaps is a collection of type List[X], where X isn't really relevant here. The result of a for-comprehension on a List is always another List. The other generators aren't relevant.
So, you take this List, get all the key/values inside each Map which is a component of this List, and make a new List with all of that. That's why you are duplicating everything you have.
(in fact, it's even worse than that, because each generator creates a new collection; the collections created by the second generator are just the size of each element of listOfMaps though, and are immediately discarded after use)
The next question -- actually, the first one, but it was easier to invert the answer -- is how the use of projection helps.
When you call projection on a List, it returns a new object, of type Stream (on Scala 2.7.x). At first you may think this will only make things worse, because you'll now have three copies of the List, instead of a single one. But a Stream is not pre-computed. It is lazily computed.
What that means is that the resulting object, the Stream, isn't a copy of the List, but, rather, a function that can be used to compute the Stream when required. Once computed, the result will be kept so that it doesn't need to be computed again.
Also, map, flatMap and filter of a Stream all return a new Stream, which means you can chain them all together without making a single copy of the List which created them. Since for-comprehensions with yield use these very functions, using a Stream inside them prevents unnecessary copies of the data.
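To see why, recall that a for/yield comprehension is just syntactic sugar for calls to these methods; the two lines below are equivalent (a sketch reusing listOfMaps and the 2.7.x projection method from above):
// The for-comprehension desugars to the flatMap/map chain below,
// so on a lazy Stream no intermediate List is ever built.
val kvs1 = for (m <- listOfMaps.projection; kv <- m) yield kv
val kvs2 = listOfMaps.projection.flatMap(m => m.map(kv => kv))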
Now, suppose you wrote something like this:
val kvs = for (m <- listOfMaps.projection; kv <- m) yield kv
(Map[A,B]() /: kvs) { ... }
In this case you aren't gaining anything. After assigning the Stream to kvs, the data hasn't been copied yet. Once the second line is executed, though, kvs will have computed each of its elements, and, therefore, will hold a complete copy of the data.
Now consider the original form:
(Map[A,B]() /: (for (m <- listOfMaps.projection; kv <- m) yield kv))
In this case, the Stream is used at the same time it is computed. Let's briefly look at how foldLeft for a Stream is defined:
override final def foldLeft[B](z: B)(f: (B, A) => B): B = {
  if (isEmpty) z
  else tail.foldLeft(f(z, head))(f)
}
If the Stream is empty, just return the accumulator. Otherwise, compute a new accumulator (f(z, head)) and then pass it and the function to the tail of the Stream.
Once f(z, head) has executed, though, there will be no remaining reference to the head. Or, in other words, nothing anywhere in the program will be pointing to the head of the Stream, and that means the garbage collector can collect it, thus freeing memory.
The end result is that each element produced by the for-comprehension will exist just briefly, while you use it to compute the accumulator. And this is how you save keeping a copy of your whole data.
Finally, there is the question of why the third algorithm does not benefit from it. Well, the third algorithm does not use yield, so no copy of any data whatsoever is being made. In this case, using projection only adds an indirection layer.