Can you tell me whether the following code is safe and there is no risk that some documents may be missed?
The idea behind the code is to process multiple data sets (obtained for multiple queries), but to spend at most n seconds on each collection.
Example: for two sets of data [1,2,3,4] and [A,B,C,D], let's say each item takes 5 seconds to process and took_too_much_time returns true every 10 seconds; I want to process 1, 2, then A, B, then 3, 4, then C, D.
On a small amount of data everything works well, but after switching to a bigger set (tens of queries and thousands of documents) I'm missing some documents. It would be really great if you could confirm or deny that the code below may be the reason.
cursors = []
queries.each do |query|
  collection.find(query, {timeout: false}) { |c| cursors << c }
end

while(cursors.any?)
  cursors.each do |c|
    c.each do |document|
      process_document(document)
      break if took_too_much_time
    end
  end
  cursors = remove_empty_cursors(cursors)
end
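(remove_empty_cursors and took_too_much_time are helpers not shown here; below is a rough sketch of what remove_empty_cursors is assumed to do, with has_next? standing in for whatever "cursor is exhausted" check the driver provides.)

# Hypothetical helper, not shown above: keeps only cursors that still
# have documents to yield. has_next? is an assumption about the driver API.
def remove_empty_cursors(cursors)
  cursors.select { |cursor| cursor.has_next? }
end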
I am taking some sentences in an array and some keywords in queries to check whether the keywords are present in the sentences. For small sentence arrays it works fine, but for huge sentence arrays it times out every time. Any idea on how to optimise this? TIA
def textQueries(sentences, queries)
  queries.map { |query|
    index_arr = []
    sentences.map.with_index { |sentence, index|
      sentence_arr = sentence.split(' ')
      if query.split(' ').all? { |qur| sentence_arr.include?(qur) }
        index_arr << index
      end
    }
    index_arr << -1 if index_arr.empty?
    puts index_arr.join " "
  }
end
Example inputs:
**Sentences**:
it go will away
go do art
what to will east
**Queries**
it will
go east will
will
**Expected Result**
0
-1
0 2
There are a few optimizations that I see at first glance:
You are currently splitting each sentence for every query. Your sample data has 3 sentences and 3 queries. This means each sentence is split 3 times (once for each query). Since the result doesn't depend on the query, you should do this up front: each sentence should only be split once.
You are currently using sentences.map to iterate the sentences, but you don't capture the result; you only use it for iteration and push results into index_arr. map creates a new array which you never use, meaning you are chewing up memory that could be used elsewhere. This can be changed to each, which is far more efficient when you don't use the return value.
The code query.split(' ').all? { |qur| sentence_arr.include?(qur) } isn't really optimal, since it scans sentence_arr from the front for each word. Checking whether one collection is a subset or superset of another is something where Set often shines.
With all the above in mind something like this should be a lot faster:
require 'set'

def text_queries(sentences, queries)
  sentences = sentences.map { |sentence| Set.new(sentence.split(' ')) }

  queries.map do |query|
    query = Set.new(query.split(' '))
    indexes = sentences.each_index.select { |index| sentences[index] >= query }
    indexes << -1 if indexes.empty?
    indexes
  end
end
Note: if you decide to output the values to the console (as shown in the question):
puts indexes.join(' ')
then there is no reason to use queries.map, since it will just return an array of nil values (puts always returns nil). Change the map to each in this scenario.
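For illustration, here is that printing variant in full; it is just the method above with map changed to each and the puts applied (print_text_queries is a made-up name for this sketch):

require 'set'

def print_text_queries(sentences, queries)
  sentences = sentences.map { |sentence| Set.new(sentence.split(' ')) }

  queries.each do |query| # each: only the printed output matters here
    query = Set.new(query.split(' '))
    indexes = sentences.each_index.select { |index| sentences[index] >= query }
    indexes << -1 if indexes.empty?
    puts indexes.join(' ')
  end
end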
So I'm trying to iterate over the list of partitions of something, say 1:n for some n between 13 and 21. The code that I ideally want to run looks something like this:
valid_num = @parallel (+) for p in partitions(1:n)
    int(is_valid(p))
end
println(valid_num)
This would use @parallel for to map-reduce my problem. For example, compare this to the example in the Julia documentation:
nheads = @parallel (+) for i = 1:200000000
    Int(rand(Bool))
end
However, if I try my adaptation of the loop, I get the following error:
ERROR: `getindex` has no method matching getindex(::SetPartitions{UnitRange{Int64}}, ::Int64)
in anonymous at no file:1433
in anonymous at multi.jl:1279
in run_work_thunk at multi.jl:621
in run_work_thunk at multi.jl:630
in anonymous at task.jl:6
which I think is because I am trying to iterate over something that is not of the form 1:n (EDIT: I think it's because you cannot call p[3] if p=partitions(1:n)).
I've tried using pmap to solve this, but because the number of partitions can get really big, really quickly (there are more than 2.5 million partitions of 1:13, and when I get to 1:21 things will be huge), constructing such a large array becomes an issue. I left it running overnight and it still didn't finish.
Does anyone have any advice for how I can efficiently do this in Julia? I have access to a ~30 core computer and my task seems easily parallelizable, so I would be really grateful if anyone knows a good way to do this in Julia.
Thank you so much!
The below code gives 511, the number of partitions of size 2 of a set of 10.
using Iterators

s = [1,2,3,4,5,6,7,8,9,10]
is_valid(p) = length(p)==2

valid_num = @parallel (+) for i = 1:30
    sum(map(is_valid, takenth(chain(1:29, drop(partitions(s), i-1)), 30)))
end
This solution combines the takenth, drop, and chain iterators to get the same effect as the take_every iterator below under PREVIOUS ANSWER. Note that in this solution, every process must compute every partition. However, because each process uses a different argument to drop, no two processes will ever call is_valid on the same partition.
Unless you want to do a lot of math to figure out how to actually skip partitions, there is no way to avoid computing partitions sequentially on at least one process. I think Simon's answer does this on one process and distributes the partitions. Mine asks each worker process to compute the partitions itself, which means the computation is being duplicated. However, it is being duplicated in parallel, which (if you actually have 30 processors) will not cost you time.
Here is a resource on how iterators over partitions are actually computed: http://www.informatik.uni-ulm.de/ni/Lehre/WS03/DMM/Software/partitions.pdf.
PREVIOUS ANSWER (More complicated than necessary)
I noticed Simon's answer while writing mine. Our solutions seem similar to me, except mine uses iterators to avoid storing partitions in memory. I'm not sure which would actually be faster for what size sets, but I figure it's good to have both options. Assuming it takes you significantly longer to compute is_valid than to compute the partitions themselves, you can do something like this:
s = [1,2,3,4]
is_valid(p) = length(p)==2

valid_num = @parallel (+) for i = 1:30
    foldl((x,y)->(x + int(is_valid(y))), 0, take_every(partitions(s), i-1, 30))
end
which gives me 7, the number of partitions of size 2 for a set of 4. The take_every function returns an iterator that returns every 30th partition starting with the ith. Here is the code for that:
import Base: start, done, next

immutable TakeEvery{Itr}
    itr::Itr
    start::Any
    value::Any
    flag::Bool
    skip::Int64
end

function take_every(itr, offset, skip)
    value, state = Nothing, start(itr)
    for i = 1:(offset+1)
        if done(itr, state)
            return TakeEvery(itr, state, value, false, skip)
        end
        value, state = next(itr, state)
    end
    if done(itr, state)
        TakeEvery(itr, state, value, true, skip)
    else
        TakeEvery(itr, state, value, false, skip)
    end
end

function start{Itr}(itr::TakeEvery{Itr})
    itr.value, itr.start, itr.flag
end

function next{Itr}(itr::TakeEvery{Itr}, state)
    value, state_, flag = state
    for i = 1:itr.skip
        if done(itr.itr, state_)
            return state[1], (value, state_, false)
        end
        value, state_ = next(itr.itr, state_)
    end
    if done(itr.itr, state_)
        state[1], (value, state_, !flag)
    else
        state[1], (value, state_, false)
    end
end

function done{Itr}(itr::TakeEvery{Itr}, state)
    done(itr.itr, state[2]) && !state[3]
end
One approach would be to divide the problem up into pieces that are not too big to realize and then process the items within each piece in parallel, e.g. as follows:
function my_take(iter, state, n)
    i = n
    arr = Array[]
    while !done(iter, state) && (i > 0)
        a, state = next(iter, state)
        push!(arr, a)
        i = i - 1
    end
    return arr, state
end
function get_part(npart, npar)
    valid_num = 0
    p = partitions(1:npart)
    s = start(p)
    while !done(p, s)
        arr, s = my_take(p, s, npar)
        valid_num += @parallel (+) for a in arr
            length(a)
        end
    end
    return valid_num
end
valid_num = @time get_part(10,30)
I was going to use the take() method to realize up to npar items from the iterator, but take() appears to be deprecated, so I've included my own implementation, which I've called my_take(). The get_part() function then uses my_take() to obtain up to npar partitions at a time and carries out a calculation on them. In this case the calculation just adds up their lengths, because I don't have the code for the OP's is_valid() function. get_part() then returns the result.
Because the length() calculation isn't very time-consuming, this code is actually slower when run on parallel processors than it is on a single processor:
$ julia -p 1 parpart.jl
elapsed time: 10.708567515 seconds (373025568 bytes allocated, 6.79% gc time)
$ julia -p 2 parpart.jl
elapsed time: 15.70633439 seconds (548394872 bytes allocated, 9.14% gc time)
Alternatively, pmap() could be used on each piece of the problem instead of the parallel for loop.
With respect to the memory issue, realizing 30 items from partitions(1:10) took nearly 1 gigabyte of memory on my PC when I ran Julia with 4 worker processes so I expect realizing even a small subset of partitions(1:21) will require a great deal of memory. It may be desirable to estimate how much memory would be needed to see if it would be at all possible before trying such a computation.
With respect to the computation time, note that:
julia> length(partitions(1:10))
115975
julia> length(partitions(1:21))
474869816156751
... so even efficient parallel processing on 30 cores might not be enough to make the larger problem solvable in a reasonable time.
I have a large vector of vectors of strings:
There are around 50,000 vectors of strings,
each of which contains 2-15 strings of length 1-20 characters.
MyScoringOperation is a function which operates on a vector of strings (the datum) and returns an array of 10100 scores (as Float64s). It takes about 0.01 seconds to run MyScoringOperation (depending on the length of the datum)
function MyScoringOperation(state::State, datum::Vector{String})
    ...
    score::Vector{Float64} #Size of score = 10000
I have what amounts to a nested loop.
The outer loop typically runs for 500 iterations:
data::Vector{Vector{String}} = loaddata()
for ii in 1:500
    score_total = zeros(10100)
    for datum in data
        score_total += MyScoringOperation(datum)
    end
end
On one computer, on a small test case of 3000 (rather than 50,000) this takes 100-300 seconds per outer loop.
I have 3 powerful servers with Julia 0.3.9 installed (and can get 3 more easily, and then can get hundreds more at the next scale).
I have basic experience with @parallel; however, it seems like it is spending a lot of time copying the constant (it more or less hangs on the smaller test case).
That looks like:
data::Vector{Vector{String}} = loaddata()
state = init_state()
for ii in 1:500
    score_total = @parallel(+) for datum in data
        MyScoringOperation(state, datum)
    end
    state = update(state, score_total)
end
My understanding of the way this implementation works with @parallel is that, for each ii, it:
1. partitions data into a chunk for each worker
2. sends that chunk to each worker
3. has each worker process its chunk
4. has the main procedure sum the results as they arrive.
I would like to remove step 2,
so that instead of sending a chunk of data to each worker,
I just send a range of indexes to each worker, and they look it up from their own copy of data. Or even better, give each worker only its own chunk and have it reuse that chunk each time (saving a lot of RAM).
Profiling backs up my belief about the functioning of @parallel.
For a similarly scoped problem (with even smaller data),
the non-parallel version runs in 0.09 seconds,
while the parallel version runs in 185 seconds.
The profiler shows almost 100% of this time is spent interacting with network IO.
This should get you started:
function get_chunks(data::Vector, nchunks::Int)
    base_len, remainder = divrem(length(data), nchunks)
    chunk_len = fill(base_len, nchunks)
    chunk_len[1:remainder] += 1 # remainder will always be less than nchunks
    function _it()
        for ii in 1:nchunks
            chunk_start = sum(chunk_len[1:ii-1]) + 1
            chunk_end = chunk_start + chunk_len[ii] - 1
            chunk = data[chunk_start:chunk_end]
            produce(chunk)
        end
    end
    Task(_it)
end
function r_chunk_data(data::Vector)
    all_chunks = get_chunks(data, nworkers()) |> collect;
    remote_chunks = [put!(RemoteRef(pid)::RemoteRef, all_chunks[ii]) for (ii, pid) in enumerate(workers())]
    # Have to add the type annotation as otherwise it thinks that RemoteRef(pid) might return a RemoteValue
end
function fetch_reduce(red_acc::Function, rem_results::Vector{RemoteRef})
    total = nothing
    # TODO: consider wrapping total in a lock once on 0.4, so that it is guaranteed safe
    @sync for rr in rem_results
        function gather(rr)
            res = fetch(rr)
            if total === nothing
                total = res
            else
                total = red_acc(total, res)
            end
        end
        @async gather(rr)
    end
    total
end
function prechunked_mapreduce(r_chunks::Vector{RemoteRef}, map_fun::Function, red_acc::Function)
    rem_results = map(r_chunks) do rchunk
        function do_mapred()
            @assert rchunk.where == myid()
            @pipe rchunk |> fetch |> map(map_fun, _) |> reduce(red_acc, _) # @pipe comes from the Pipe.jl package
        end
        remotecall(rchunk.where, do_mapred)
    end
    @pipe rem_results |> convert(Vector{RemoteRef}, _) |> fetch_reduce(red_acc, _)
end
r_chunk_data breaks the data into chunks (as defined by the get_chunks method) and sends each chunk to a different worker, where it is stored in a RemoteRef.
The RemoteRefs are references to memory on your other processes (and potentially on other computers).
prechunked_mapreduce does a variation on map-reduce: each worker first runs map_fun on each of its chunk's elements, then reduces over all the elements in its chunk using red_acc (a reduction accumulator function). Finally, each worker returns its result, and these are combined by reducing them all together with red_acc, this time via fetch_reduce so that the first results to complete can be added first.
fetch_reduce is a nonblocking fetch-and-reduce operation. I believe it has no race conditions, though this may be because of an implementation detail of @async and @sync. When Julia 0.4 comes out, it will be easy enough to put a lock in to make it obviously free of race conditions.
This code isn't really battle hardened.
You also might want to look at making the chunk size tunable, so that you can send more data to the faster workers (if some have better network connections or faster CPUs).
You need to reexpress your code as a map-reduce problem, which doesn't look too hard.
Testing that with:
data = [float([eye(100), eye(100)])[:] for _ in 1:3000] # ~480Mb
r_chunks = r_chunk_data(data)
@time prechunked_mapreduce(r_chunks, mean, (+))
Took ~0.03 seconds, when distributed across 8 workers (none of them on the same machine as the launcher)
vs running just locally:
@time reduce(+, map(mean, data))
took ~0.06 seconds.
I want to fetch a bunch of XML documents and parse them. They are somewhat large.
So I was thinking I could fetch and parse them in a future like this (I currently use Celluloid):
country_xml = {}
country_pool = GetAndParseXML.pool size: 4, args: [@connection]

countries.each do |country|
  country_xml[country] = country_pool.future.fetch_xml country
end

countries.each do |country|
  xml = country_xml[country]
  # Do stuff with the XML!
end
This would be fine if it weren't for the fact that it takes up a lot of memory before it's actually needed.
Ideally I want it to buffer up maybe 3 XML files, stop and wait until at least 1 is processed, then continue. How would I do that?
The first question is: what is it that's taking up the memory? I will assume it's the parsed XML documents, as that seems most likely to me.
I think the easiest way would be to create an actor that will fetch and process the XML. If you then create a pool of 3 of these actors you will have at most 3 requests being processed at once.
In vague terms (assuming that you aren't using the Celluloid registry):
class DoStuffWithCountryXml
  include Celluloid
  exclusive :do_stuff_with_country

  def initialize(fetcher)
    @fetcher = fetcher
  end

  def do_stuff_with_country(country)
    country_xml = @fetcher.fetch_xml country
    # Do stuff with country_xml
  end
end
country_pool = GetAndParseXML.pool size: 4, args: [@connection]
country_process_pool = DoStuffWithCountryXml.pool size: 3, args: [country_pool]
countries_futures = countries.map { |c| country_process_pool.future.do_stuff_with_country(c) }
countries_stuff = countries_futures.map { |f| f.value }
Note that if this is the only place where GetAndParseXML is used, then its pool size might as well be the same as the DoStuffWithCountryXml pool's.
I would not use a Pool at all. You're not benefiting from it. A lot of people seem to feel using a Future and a Pool together is a good idea, but it's usually worse than using one or the other.
In your case, use Future ... but you will also benefit from the upcoming Multiplexer features. Until then, do this... use a totally different strategy than has been tried or suggested:
class HandleXML
  include Celluloid

  def initialize(fetcher)
    @fetcher = fetcher
  end

  def get_xml(country)
    @fetcher.fetch_xml(country)
  end

  def process_xml(country, xml)
    #de Do whatever you need to do with the data.
  end
end
def begin_processor(handler, countries, index)
  #de Returns a future for the XML of the country at this index.
  handler.future.get_xml(countries[index])
end

limiter = 3 #de This sets your desired limit.
country_index = 0
data_index = 0
data = {}
processing = []
handler = HandleXML.new(@connection)

#de Load up your initial futures.
limiter.times {
  processing << begin_processor(handler, countries, country_index)
  country_index += 1
}

while data_index < countries.length
  data[countries[data_index]] = processing.shift.value
  handler.process_xml(countries[data_index], data[countries[data_index]])
  data_index += 1
  #de Once you've taken out one XML set above, load up another.
  if country_index < countries.length
    processing << begin_processor(handler, countries, country_index)
    country_index += 1
  end
end
The above is just an example of how to do it with Future only, handling 3 at a time. I've not run it and it could have errors, but the idea is demonstrated for you.
The code loads up 3 sets of Country XML, then starts processing that XML. Once it has processed one set of XML, it loads up another, until all the country XML is processed.
I'm running some Ruby scripts concurrently using Grosser/Parallel.
During each concurrent test I want to add up the number of times a particular thing has happened, then display that number.
Let's say:
def main
  $this_happened = 0
  do_this_in_parallel
  puts $this_happened
end

def do_this_in_parallel
  Parallel.each(...) {
    $this_happened += 1
  }
end
The final value after do_this_in_parallel has finished will always be 0.
I'd like to know why this happens.
How can I get the desired result, which would be $this_happened > 0?
Thanks.
This doesn't work because separate processes have separate memory spaces: setting variables etc in one process has no effect on what happens in the other process.
However, you can return a result from your block (because under the hood Parallel sets up pipes so that the processes can be fed input and return results). For example, you could do this:
counts = Parallel.map(...) do
  # the return value of the block should
  # be the number of times the event occurred
end
Then just sum the counts to get your total count (e.g. counts.reduce(:+)). You might also want to read up on map-reduce for more information about this way of parallelising work.
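As a minimal, self-contained sketch of that pattern (the items and the work inside the block are made up for illustration):

require 'parallel'

items = (1..100).to_a

# Each block runs in a separate process; its return value is sent back to
# the parent over a pipe, so we return a per-item count instead of mutating a global.
counts = Parallel.map(items) do |item|
  item.even? ? 1 : 0 # stand-in for "how many times the event occurred for this item"
end

total = counts.reduce(0, :+)
puts total # => 50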
I have never used parallel but the documentation seems to suggest that something like this might work.
Parallel.each(..., :finish => lambda {|*_| $this_happened += 1}) { do_work }
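As a rough sketch of how that might look in full (do_work here is a stand-in for the real per-item work; the key point is that the :finish hook runs in the parent process, so the global counter is updated in the parent's memory space):

require 'parallel'

$this_happened = 0

def do_work(item)
  item * 2 # stand-in for the real work
end

# :finish is called in the parent process after each item completes,
# which is why incrementing the global here actually sticks.
Parallel.each(1..10, finish: ->(_item, _index, _result) { $this_happened += 1 }) do |item|
  do_work(item)
end

puts $this_happened # => 10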