The problem happens when I try to keep my cached results in a List and compute a new DataFrame from all the data in that list on each iteration. However, even though I use an empty DataFrame and get an empty result each time, the function suddenly gets very slow after about 8~12 rounds.
Here is my code
testLoop(Nil)

def testLoop(lastDfList: List[DataFrame]) {
  // do some dummy transformation like union and cache the result
  val resultDf = lastDfList.foldLeft(Seq[Data]().toDF) { (df, lastDf) => df.union(lastDf) }.cache
  // always get 0, of course
  println(resultDf.count)
  // benchmark action
  benchmark(resultDf.count)
  testLoop(resultDf :: lastDfList)
}
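(Data is a placeholder case class and benchmark a small timing helper, neither shown here; purely as a hypothetical stand-in so the snippet is runnable, benchmark could look like this:)

// Hypothetical stand-in for the benchmark helper above (not from the question):
// evaluate a by-name expression once and print the elapsed milliseconds.
def benchmark[T](body: => T): T = {
  val t0 = System.nanoTime()
  val result = body
  println(s"elapsed: ${(System.nanoTime() - t0) / 1e6} ms")
  result
}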
The benchmark results:
1~6 round : < 200ms
7 round : 367ms
8 round : 918ms
9 round : 2476ms
10 round : 7833ms
11 round : 24231ms
I don't think GC or block eviction is the problem in my case, since I always use an empty DataFrame, but I don't know what the cause is. Do I misunderstand the meaning of cache, or something else?
Thanks!
After reading ImDarrenG's solution, I changed my code to the following:
spark.sparkContext.setCheckpointDir("/tmp")
testLoop(Nil)

def testLoop(lastDfList: List[DataFrame]) {
  // do some dummy transformation like union and cache the result
  val resultDf = lastDfList.foldLeft(Seq[Data]().toDF) { (df, lastDf) => df.union(lastDf) }.cache
  resultDf.checkpoint()
  // always get 0, of course
  println(resultDf.count)
  // benchmark action
  benchmark(resultDf.count)
  testLoop(resultDf :: lastDfList)
}
But it still becomes very slow after a few iterations.
Here you create a list of DataFrames by adding resultDf to the beginning of lastDfList and pass that to the next iteration of testLoop:
testLoop(resultDf::lastDfList)
So lastDfList gets longer each pass.
This line creates a new DataFrame by unioning each member of lastDfList:
val resultDf = lastDfList.foldLeft(Seq[Data]().toDF) { (df, lastDf) => df.union(lastDf) }.cache
Each member of lastDfList is a union of its predecessors; therefore, Spark is maintaining a lineage that becomes exponentially larger with each pass of testLoop.
I expect that the increase in time is caused by the housekeeping of the DAG. Caching the DataFrames removes the need to repeat the transformations, but the lineage must still be maintained by Spark.
Cached data or not, it looks like you are building a really complex DAG by unioning each DataFrame with all of its predecessors on each pass of testLoop.
You could use checkpoint to trim the lineage, and introduce some check to prevent infinite recursion.
According to the API docs and the source code, checkpoint returns a new Dataset instead of modifying the original Dataset, so you need to use its return value.
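A minimal sketch of that fix, reusing the Data, benchmark, and testLoop from your snippets (their definitions are assumed as above): recurse on the Dataset that checkpoint() returns, so the lineage is actually truncated.

spark.sparkContext.setCheckpointDir("/tmp")
testLoop(Nil)

def testLoop(lastDfList: List[DataFrame]) {
  val unioned = lastDfList.foldLeft(Seq[Data]().toDF) { (df, lastDf) => df.union(lastDf) }
  // checkpoint() returns a new Dataset whose lineage is cut at the checkpoint
  val resultDf = unioned.checkpoint()
  println(resultDf.count)   // still 0, as in the question
  benchmark(resultDf.count)
  // recurse on the checkpointed Dataset; the recursion is still unbounded here,
  // as in the question, so add a stopping condition in real code
  testLoop(resultDf :: lastDfList)
}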
I am doing an iterative computation using the Flink DataSet API.
But the result of each iteration is a part of my complete solution.
(If more details are required: I am computing lattice nodes level-wise, starting from the top and moving towards the bottom in each iteration; see Formal Concept Analysis.)
If I use the Flink DataSet API with a bulk iteration, without saving my result, the code looks like this:
val start = env.fromElements((0, BitSet.empty))
val end = start.iterateWithTermination(size) { inp =>
  val result = ObjData.mapPartition(new MyMapPartition)
    .withBroadcastSet(inp, "concepts")
    .groupBy(0)
    .reduceGroup(new MyReduceGroup)
  (result, result)
}
end.count()
But if I try to write partial results within the iteration (_.writeAsText()) or trigger any action, I get this error:
org.apache.flink.api.common.InvalidProgramException: A data set that is part of an iteration was used as a sink or action. Did you forget to close the iteration?
The alternative without a bulk iteration seems to be this:
var start = env.fromElements((0, BitSet.empty))
var count = 1L
var all = count
while (count > 0) {
  start = ObjData.mapPartition(new MyMapPartition)
    .withBroadcastSet(start, "concepts")
    .groupBy(0)
    .reduceGroup(new MyReduceGroup)
  count = start.count()
  all = all + count
}
println("total nodes: " + all)
But this approach is exceptionally slow even on the smallest input data: the iteration version takes <30 seconds, while the loop version takes >3 minutes.
I guess Flink is not able to create an optimal plan to execute the loop.
Any workaround I should try? Is some modification to Flink possible so that partial results can be saved to Hadoop, etc.?
Unfortunately, it is not currently possible to output intermediate results from a bulk iteration. You can only output the final result at the end of the iteration.
Also, as you correctly noticed, Flink cannot efficiently unroll a while-loop or for-loop, so that won't work either.
If your intermediate results are not that big, one thing you can try is appending your intermediate results to the partial solution and then outputting everything at the end of the iteration. A similar approach is implemented in the TransitiveClosureNaive example, where paths discovered in an iteration are accumulated in the next partial solution.
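For illustration only, here is a rough sketch of that idea grafted onto your iterateWithTermination loop (ObjData, MyMapPartition, MyReduceGroup, and size are taken from your question; the output path is a placeholder, and whether already-finished levels should stay in the broadcast set depends on your MyMapPartition logic, so treat this as a shape rather than a tested program):

val start = env.fromElements((0, BitSet.empty))

val end = start.iterateWithTermination(size) { partial =>
  // compute the next level from the current partial solution
  val nextLevel = ObjData.mapPartition(new MyMapPartition)
    .withBroadcastSet(partial, "concepts")
    .groupBy(0)
    .reduceGroup(new MyReduceGroup)
  // keep previously computed levels in the partial solution so they are part
  // of the final result; termination is driven by the newly produced level
  (partial.union(nextLevel), nextLevel)
}

// all accumulated levels come out here, after the iteration has finished
end.writeAsText("hdfs:///path/to/output")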
I am new to Julia, and to get started I wanted to port some NumPy code to Julia, hoping for a nice performance increase. So far, the results have not been to my satisfaction.
This is the function I want to compute
function s(x_list, r_list)
    result_list = zeros(size(x_list,1))
    for i = 1:size(x_list,1)
        dotprods = r_list * x_list[i,:]'
        expcall = exp(im * dotprods)
        sumprod = sum(expcall) * sum(conj(expcall))
        result_list[i] = sumprod
    end
    return result_list
end
with input data that looks like
v = rand(3)
r = rand(6000,3)
x = linspace(1.0, 2.0, 300) * (v./sqrt(sumabs2(v)))'
For this function and the given input, @time s(x,r) gives me:
0.110619 seconds (3.60 k allocations: 96.256 MB, 8.47% gc time)
For this case, NumPy does the same job in ~70ms, so I'm not very happy! Now if I do a @parallel for loop with julia -p 2:
function s(x_list, r_list)
    result_list = SharedArray(Float64, size(x_list,1))
    @parallel for i = 1:size(x_list,1)
        dotprods = r_list * x_list[i,:]'
        expcall = exp(im * dotprods)
        sumprod = sum(expcall) * sum(conj(expcall))
        result_list[i] = sumprod
    end
    return result_list
end
The problem is that
result_list[i] = sumprod
doesn't get updated and I get the list of zeros returned from the array initialization. What am I doing wrong here?
Further attempts to increase speed also did not show any benefit, e.g.
@vectorize_2arg Array{Float64,2} s
and declaring types
function s{T<:Float64}(x_list::Array{T,2}, r_list::Array{T,2})
But now, starting the same @parallel for loop in a session with just a single process (no -p 2, just julia), the array does get updated and @time s(x,r) tells me:
0.000040 seconds (36 allocations: 4.047 KB)
which is actually impossible for the function and input given! Is this a bug?
Any help is very appreciated!
Julia's @parallel macro does a distributed for loop: it copies all the data to other processes and does computations on each of them, reducing over the results and returning that result. The processes do not share memory – and may even be on other machines altogether. Your original data is never touched because each worker is modifying its own copy of that data. You may be thinking of threads, which is a currently-experimental feature that Julia will be adding in the future.
One problem is that you're not waiting for the @parallel call to complete. From the docs:
...the reduction operator can be omitted if it is not needed. In that case, the loop executes asynchronously, i.e. it spawns independent tasks on all available workers and returns an array of Future immediately without waiting for completion. The caller can wait for the Future completions at a later point by calling fetch() on them, or wait for completion at the end of the loop by prefixing it with @sync, like @sync @parallel for.
Try prefixing the for loop with @sync.
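A sketch of the whole function in your 0.4-era syntax (same names as your code; the SharedArray keeps the writes visible to the master process, and @sync blocks until all workers have finished):

function s(x_list, r_list)
    result_list = SharedArray(Float64, size(x_list,1))
    @sync @parallel for i = 1:size(x_list,1)
        dotprods = r_list * x_list[i,:]'
        expcall = exp(im * dotprods)
        # sum(e) * sum(conj(e)) is |sum(e)|^2, so the value is real
        result_list[i] = real(sum(expcall) * sum(conj(expcall)))
    end
    return result_list
end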
So I'm trying to iterate over the list of partitions of something, say 1:n for some n between 13 and 21. The code that I ideally want to run looks something like this:
valid_num = @parallel (+) for p in partitions(1:n)
    int(is_valid(p))
end
println(valid_num)
This would use the @parallel for to map-reduce my problem. For example, compare this to the example in the Julia documentation:
nheads = @parallel (+) for i = 1:200000000
    Int(rand(Bool))
end
However, if I try my adaptation of the loop, I get the following error:
ERROR: `getindex` has no method matching getindex(::SetPartitions{UnitRange{Int64}}, ::Int64)
in anonymous at no file:1433
in anonymous at multi.jl:1279
in run_work_thunk at multi.jl:621
in run_work_thunk at multi.jl:630
in anonymous at task.jl:6
which I think is because I am trying to iterate over something that is not of the form 1:n (EDIT: I think it's because you cannot call p[3] if p=partitions(1:n)).
I've tried using pmap to solve this, but because the number of partitions gets really big, really quickly (there are more than 2.5 million partitions of 1:13, and by the time I get to 1:21 things will be huge), constructing such a large array becomes an issue. I left it running overnight and it still didn't finish.
Does anyone have any advice for how I can efficiently do this in Julia? I have access to a ~30 core computer and my task seems easily parallelizable, so I would be really grateful if anyone knows a good way to do this in Julia.
Thank you so much!
The code below gives 511, the number of partitions of a 10-element set into two blocks.
using Iterators

s = [1,2,3,4,5,6,7,8,9,10]
is_valid(p) = length(p) == 2

valid_num = @parallel (+) for i = 1:30
    sum(map(is_valid, takenth(chain(1:29, drop(partitions(s), i-1)), 30)))
end
This solution combines the takenth, drop, and chain iterators to get the same effect as the take_every iterator below under PREVIOUS ANSWER. Note that in this solution, every process must compute every partition. However, because each process uses a different argument to drop, no two processes will ever call is_valid on the same partition.
Unless you want to do a lot of math to figure out how to actually skip partitions, there is no way to avoid computing partitions sequentially on at least one process. I think Simon's answer does this on one process and distributes the partitions. Mine asks each worker process to compute the partitions itself, which means the computation is being duplicated. However, it is being duplicated in parallel, which (if you actually have 30 processors) will not cost you time.
Here is a resource on how iterators over partitions are actually computed: http://www.informatik.uni-ulm.de/ni/Lehre/WS03/DMM/Software/partitions.pdf.
PREVIOUS ANSWER (More complicated than necessary)
I noticed Simon's answer while writing mine. Our solutions seem similar to me, except mine uses iterators to avoid storing partitions in memory. I'm not sure which would actually be faster for what size sets, but I figure it's good to have both options. Assuming it takes you significantly longer to compute is_valid than to compute the partitions themselves, you can do something like this:
s = [1,2,3,4]
is_valid(p) = length(p) == 2

valid_num = @parallel (+) for i = 1:30
    foldl((x,y) -> (x + int(is_valid(y))), 0, take_every(partitions(s), i-1, 30))
end
which gives me 7, the number of partitions of a 4-element set into two blocks. The take_every function returns an iterator that yields every 30th partition, starting with the ith. Here is the code for that:
import Base: start, done, next

# Iterator wrapper that yields every `skip`-th element of `itr`, starting after
# an initial offset; `flag` marks that the underlying iterator is exhausted but
# one element is still pending.
immutable TakeEvery{Itr}
    itr::Itr
    start::Any
    value::Any
    flag::Bool
    skip::Int64
end

function take_every(itr, offset, skip)
    # advance past the first `offset` elements; `value` ends up holding the
    # first element to yield
    value, state = Nothing, start(itr)
    for i = 1:(offset+1)
        if done(itr, state)
            return TakeEvery(itr, state, value, false, skip)
        end
        value, state = next(itr, state)
    end
    if done(itr, state)
        TakeEvery(itr, state, value, true, skip)
    else
        TakeEvery(itr, state, value, false, skip)
    end
end

function start{Itr}(itr::TakeEvery{Itr})
    itr.value, itr.start, itr.flag
end

function next{Itr}(itr::TakeEvery{Itr}, state)
    # emit the stored element, then advance the underlying iterator by `skip`
    # to find the next one
    value, state_, flag = state
    for i = 1:itr.skip
        if done(itr.itr, state_)
            return state[1], (value, state_, false)
        end
        value, state_ = next(itr.itr, state_)
    end
    if done(itr.itr, state_)
        state[1], (value, state_, !flag)
    else
        state[1], (value, state_, false)
    end
end

function done{Itr}(itr::TakeEvery{Itr}, state)
    done(itr.itr, state[2]) && !state[3]
end
One approach would be to divide the problem up into pieces that are not too big to realize and then process the items within each piece in parallel, e.g. as follows:
function my_take(iter, state, n)
    i = n
    arr = Array[]
    while !done(iter, state) && (i > 0)
        a, state = next(iter, state)
        push!(arr, a)
        i = i - 1
    end
    return arr, state
end

function get_part(npart, npar)
    valid_num = 0
    p = partitions(1:npart)
    s = start(p)
    while !done(p, s)
        arr, s = my_take(p, s, npar)
        valid_num += @parallel (+) for a in arr
            length(a)
        end
    end
    return valid_num
end

valid_num = @time get_part(10, 30)
I was going to use the take() method to realize up to npar items from the iterator, but take() appears to be deprecated, so I've included my own implementation, which I've called my_take(). The get_part() function therefore uses my_take() to obtain up to npar partitions at a time and carry out a calculation on them. In this case, the calculation just adds up their lengths, because I don't have the code for the OP's is_valid() function. get_part() then returns the result.
Because the length() calculation isn't very time-consuming, this code is actually slower when run on parallel processors than it is on a single processor:
$ julia -p 1 parpart.jl
elapsed time: 10.708567515 seconds (373025568 bytes allocated, 6.79% gc time)
$ julia -p 2 parpart.jl
elapsed time: 15.70633439 seconds (548394872 bytes allocated, 9.14% gc time)
Alternatively, pmap() could be used on each piece of the problem instead of the parallel for loop.
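A hypothetical sketch of that pmap() variant, reusing my_take() from above (length() again stands in for the OP's is_valid()):

function get_part_pmap(npart, npar)
    valid_num = 0
    p = partitions(1:npart)
    s = start(p)
    while !done(p, s)
        arr, s = my_take(p, s, npar)
        # pmap farms the per-partition work out to the worker processes
        valid_num += sum(pmap(length, arr))
    end
    return valid_num
end

valid_num = @time get_part_pmap(10, 30)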
With respect to the memory issue: realizing 30 items from partitions(1:10) took nearly 1 gigabyte of memory on my PC when I ran Julia with 4 worker processes, so I expect that realizing even a small subset of partitions(1:21) will require a great deal of memory. It may be desirable to estimate how much memory would be needed, to see whether the computation is at all possible, before attempting it.
With respect to the computation time, note that:
julia> length(partitions(1:10))
115975
julia> length(partitions(1:21))
474869816156751
... so even efficient parallel processing on 30 cores might not be enough to make the larger problem solvable in a reasonable time.
I'm trying to improve my understanding of "collect" in my Spark app code, and I'm dealing with this code:
val triple = logData.map(x => x.split('#'))
  .map(x => (x(1), x(0), x(2)))
  .collect()
  .sortBy(x => (x._1, x._2))

val idx = sc.parallelize(triple)
Basically, I'm creating an RDD of (String, String, String) with an unnecessary (IMHO) collect/parallelize step (200k elements in the original RDD).
The Spark guide says: "Collect: Return all the elements of the dataset as an array at the driver program. This is usually useful after a filter or other operation that returns a sufficiently small subset of the data."
BTW: is 200k sufficiently small?
I feel that this code should be "lighter" (with no collect-parallelize):
val triple = logData.map(x => x.split('#'))
  .map(x => (x(1), x(0), x(2)))
  .sortBy(x => (x._1, x._2))

val idx = triple
But after having run the same app (locally, not distributed) many times, I always get faster times with the first code, which in my opinion is doing extra work (first collect, then parallelize).
The entire app (not just this code snippet) takes on average 48 seconds in the first case, and at least 52 seconds in the second case.
How is this possible?
Thanks in advance
I think it is because the dataset is too small. In the latter case you pay for the scheduling of a shuffle to do the sort, whereas sorting locally on the driver can be faster. When your dataset grows, it may not even be possible to collect it into the driver.
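One way to see the difference being described (a hedged illustration using the question's own variable names) is to print the lineage of idx in each version:

// In the version without collect, the shuffle that sortBy schedules shows up
// in the printed lineage; in the collect/parallelize version, idx is just a
// freshly parallelized collection with no shuffle ancestry.
println(idx.toDebugString)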