I have a table (natomr) with 200 records that define different areas. I want to find out which area(s) contain an arbitrary point. This is my SQL:
SELECT *
FROM natomr
WHERE ST_DWithin(the_geom4326,
ST_geomfromtext('POINT(13.614807 59.684035)', 4326)::geography, 1)
This query takes about 1200 ms, which I assume is way too long for such a small table.
I have created an index for the_geom4326, like this:
CREATE INDEX natomr_the_geom4326_gist
ON natomr
USING gist
(the_geom4326 );
I have also run the VACUUM FULL command, but that did not have any effect.
What should I do to speed up the query?
Hard to tell if this is unexpected or not from what you have here... 1200 ms might be expected.
Autovacuum prevents transaction ID wraparound; it shouldn't have a speed effect on a table this small.
The table is almost too small for the index to really be effective.
Some possibilities:
ST_DWithin has a certain amount of overhead associated with it... it is composed of three calls to two other functions that are entirely contrib library code (in C). So your run time is going to look something like overhead + x seconds per record processed. Try scaling your data up a bit: try 10 points in a single query. This will give you a better idea of the overhead associated with ST_DWithin (a rough timing sketch follows after the next point).
How big are the polygons in the shape files? As an interesting test, try defining a 5-point polygon and run the query to find a point in that polygon. Now define a 2000-point polygon and try the same test. If your 200 polygons here are 2000 points and larger, 1200 ms doesn't sound too unreasonable depending on the power of your machine.
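For the overhead measurement, something along these lines might help. This is only a sketch: it assumes a Python/psycopg2 harness (the question doesn't say what client is used) and simply reuses the sample point ten times so the fixed per-query overhead can be separated from the per-row cost.

import time
import psycopg2

conn = psycopg2.connect("dbname=mydb")   # placeholder connection string
cur = conn.cursor()

points = ["POINT(13.614807 59.684035)"] * 10   # the sample point, reused 10 times

start = time.time()
for wkt in points:
    cur.execute(
        """
        SELECT *
        FROM natomr
        WHERE ST_DWithin(the_geom4326,
                         ST_GeomFromText(%s, 4326)::geography, 1)
        """,
        (wkt,),
    )
    cur.fetchall()
elapsed_ms = (time.time() - start) * 1000
print("10 point lookups took %.0f ms" % elapsed_ms)
# If this is much less than 10 x 1200 ms, a large share of the 1200 ms is fixed overhead.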
I am interested in computing the FFT of the first rows of a matrix, but I do not know in advance how many rows I will need. I need to do this repeatedly, and the number of rows I need to transform can change each time.
I will illustrate with the following example. Suppose I have a 100 by 128 array. If I plan for 1-dimensional FFTs on each row, FFTW produces the following plan:
(dft-ct-dit/8
(dftw-direct-8/28-x100 "t2fv_8_sse2")
(dft-vrank>=1-x8/1
(dft-direct-16-x100 "n1fv_16_sse2")))
Although I don't fully understand this output, I do see the key ingredients: 1) This is a single Cooley-Tukey pass; note that 8*16=128. 2) Because of the x100 postfix on two lines, the plan states that this needs to happen for 100 rows.
I see three possibilities:
One-size-fits-all planning: plan for the 100 by 128 array, and execute this big plan even when only the first (say) 20 rows are needed.
Pros: we need only one plan so there is little planning overhead. Cons: potentially substantial performance loss in the execution phase (transforming more than I need).
Exhaustive planning: obtain plans using the same input/output array but for all possible numbers of rows. In the example I would have 100 plans, where plan i carries out the FFT for each of the first i rows. Pros: transforming exactly what I need. Cons: experiments show that I have to pay the planning penalty over and over, even though, say, for i=50 the plan will be the same as above but with x50 instead of x100. (I suppose there is no guarantee this will indeed be the plan identified by the FFTW planner, but I wouldn't mind losing "optimality" if I can cut out the planning time.)
Single-row planning: plan for a single row and use a loop to move data into the input, transform, and move data out of the output. Pros: I'm transforming exactly what I need. Cons: it seems to me this removes a lot of the FFTW optimizations, for instance when I use multiple threads. (I'm generally confused about how multithreading works in FFTW since it is poorly documented... I know threading information is part of the plan, but printing the plan doesn't display any of it. This is off-topic though.)
I was thinking that I would combine all three ideas by first creating the one-size-fits-all plan, modifying this plan 99 times in a for loop instead of planning for the different sizes, and executing as under the exhaustive-planning approach. However, I can't find any documentation on the plan/wisdom format; the wisdom output, with its hexadecimal numbers, is impenetrable. So I am wondering how I can carry out this hybrid approach.
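For what it's worth, here is a rough sketch of the single-row-planning option described above, written against the pyfftw wrapper purely for illustration (the question concerns the C API, and the array sizes and names below are just placeholders):

import numpy as np
import pyfftw

n_cols = 128
data = np.random.rand(100, n_cols) + 1j * np.random.rand(100, n_cols)

# Plan once for a single 128-point row.
row_in = pyfftw.empty_aligned(n_cols, dtype='complex128')
row_out = pyfftw.empty_aligned(n_cols, dtype='complex128')
plan = pyfftw.FFTW(row_in, row_out, flags=('FFTW_MEASURE',))

def fft_first_rows(matrix, k):
    # Transform only the first k rows, reusing the single-row plan.
    out = np.empty((k, n_cols), dtype='complex128')
    for i in range(k):
        row_in[:] = matrix[i]   # move data into the planned input array
        plan()                  # execute the plan
        out[i] = row_out        # move data out of the planned output array
    return out

result = fft_first_rows(data, 20)   # only the first 20 rows are transformed

This avoids re-planning entirely, at the cost of the per-row copies and the loss of the x100 vectorisation that shows up in the plan printed above.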
I am using MongoDB and I need to update my documents, say 1,000 in total. Each document has a basic structure like:
{
  People: [1, 2, 3, 4],
  Place: "Auckland",
  Event: "Music Show"
}
I have 10,000 threads running concurrently in another VM. Each thread looks at these 1,000 documents, checks which of them match its query, and pushes a number into the People array. For example, if thread 100 finds, say, 500 of these 1,000 documents relevant, it pushes the number 100 into the People array of each of those 500 documents.
For this, each of the 10,000 threads runs the command
update.append("$push", new BasicDBObject("People", serial_number));
updateMulti(query, update);
I observe poor performance for these in-place updates (multi-query).
Is this a problem due to a write lock?
Every one of the 10,000 threads updates the documents that are relevant to its query, so there seems to be a lot of "waiting".
Is there a more efficient way to do these "push" operations?
Is "UpdateMulti" the right way to approach this?
Thank you for a great response. Editing and adding some more information below.
Some design background:
Yes, your reading of our problem is correct. We have 10,000 threads, each representing one "actor", updating up to 1,000 entities (based on the appropriate query) at a time with a $push.
Inverting the model leads us to a few broken use cases (from our domain perspective), because it forces joins across "states" of the primary entity, which would now be spread across many collections. For example, each of these actions is a state change for that entity: E has states (e1, e2, e3, e4, e5), so e1 to e5 is represented as an aggregate array which gets updated by the 10,000 threads/processes that represent actions of external apps.
We need close to real-time aggregation as another set of "actors" look at these "states" of e1 to e5 and then respond appropriately via another channel to the "elements in the array".
What would be the "ideal" design strategy in such a case to speed up the writes?
Will sharding help? Is there a "magnitude" heuristic for this, e.g. at what lock percentage should we shard, etc.?
This is a problem because of your schema design.
It is extremely inefficient to $push multiple values to multiple documents, especially from multiple threads. It's not so much that the write lock is the problem, it's that your design made it the problem. In addition, you are continuously growing documents which means that the updates are not "in place" and your collection is quickly getting fragmented.
It seems like your schema is "upside down". You have 10,000 threads looking to add numbers representing people (I assume a very large number of people) to a small number of documents (1000) which will grow to be huge. It seems to me that if you want to embed something in something else, you might consider collections representing people and then embedding events that those people are found at - at least then you are limiting the size of the array for each person to 1,000 at most, and the updates will be spread across a much larger number of documents, reducing contention significantly.
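As a rough sketch of that inverted model (pymongo is used here purely for illustration since the question uses the Java driver; the collection and field names are placeholders):

from pymongo import MongoClient

client = MongoClient()          # connection details are placeholders
db = client.example_db

def record_matches(person_id, matching_event_ids):
    # One update per person/actor: append all events this actor matched in one go,
    # instead of $push-ing the actor id into hundreds of shared event documents.
    db.people.update_one(
        {"_id": person_id},
        {"$push": {"Events": {"$each": matching_event_ids}}},
        upsert=True,
    )

# e.g. actor 100 found 500 relevant events:
# record_matches(100, list_of_500_event_ids)

Each thread then touches only its own document, so contention on any single document stays low.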
Another option is simply to record the event/person in attendance and then do aggregation over the raw data later, but without knowing exactly what your requirements for this application are, it's hard to know which way will produce the best results - the way you have picked is definitely one that's unlikely to give you good performance.
I'm attempting to estimate the total number of results for App Engine queries that will return large numbers of results.
In order to do this, I assigned a random floating point number between 0 and 1 to every entity. Then I executed the query for which I wanted to estimate the total results with the following 3 settings:
* I ordered by the random numbers that I had assigned in ascending order
* I set the offset to 1000
* I fetched only one entity
I then plugged the entity's random value that I had assigned for this purpose into the following equation to estimate the total results (since I used 1000 as the offset above, the value of OFFSET would be 1000 in this case):
1 / RANDOM * OFFSET
The idea is that since each entity has a random number assigned to it, and I am sorting by that random number, the random value found at a given offset should be roughly proportional to that offset's position within the full result set (in this case, 1000).
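A minimal sketch of that estimator, assuming the old google.appengine.ext.db API and a model with a random_value property (both names are placeholders for whatever the actual code uses):

from google.appengine.ext import db

OFFSET = 1000

def estimate_total(query):
    # Assumes every entity was given a uniform random_value in [0, 1) at creation.
    sample = query.order('random_value').fetch(1, offset=OFFSET)
    if not sample:
        return None   # fewer than OFFSET results; count them directly instead
    # The OFFSET-th smallest of N uniform values sits near OFFSET / N,
    # so N is roughly OFFSET divided by the sampled value.
    return int(OFFSET / sample[0].random_value)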
The problem I am having is that the results I am getting are giving me low estimates, and the lower the offset, the lower the estimate. I had anticipated that the lower the offset, the less accurate the estimate would be, but I thought the margin of error would be both above and below the actual number of results.
Below is a chart demonstrating what I am talking about. As you can see, the predictions get more consistent (accurate) as the offset increases from 1000 to 5000. But then the predictions predictably follow a fourth-degree polynomial (y = -5E-15x^4 + 7E-10x^3 - 3E-05x^2 + 0.3781x + 51608).
Am I making a mistake here, or does the standard python random number generator not distribute numbers evenly enough for this purpose?
Thanks!
Edit:
It turns out that this problem is due to my mistake. In another part of the program, I was grabbing entities from the beginning of the series, doing an operation, then re-assigning the random number. This resulted in a denser distribution of random numbers towards the end.
I did a little more digging into this concept, fixed the problem, and tried it again on a different query (so the number of results is different from above). I found that this idea can be used to estimate the total results for a query. One thing of note is that the "error" is very similar for offsets that are close by. When I did a scatter chart in Excel, I expected the accuracy of the predictions at each offset to "cloud", meaning that offsets at the very beginning would produce a larger, less dense cloud that would converge to a very tiny, dense cloud around the actual value as the offsets got larger. This is not what happened, as you can see below in the chart of how far off the predictions were at each offset. Where I thought there would be a cloud of dots, there is a line instead.
This is a chart of the maximum error after each offset. For example, the maximum error for any offset after 10000 was less than 1%:
When using GAE it makes a lot more sense not to try to do large amounts of work on reads; it's built and optimized for very fast request turnarounds. In this case it's actually more efficient to maintain a count of your results as and when you create the entities.
If you have a standard query, this is fairly easy - just use a sharded counter when creating the entities. You can seed this using a map reduce job to get the initial count.
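A bare-bones version of that sharded counter, along the lines of the standard App Engine recipe (the shard count and kind name here are arbitrary choices):

import random
from google.appengine.ext import db

NUM_SHARDS = 20

class CounterShard(db.Model):
    count = db.IntegerProperty(default=0)

def increment(counter_name):
    # Pick a random shard and bump it inside a transaction.
    def txn():
        shard_name = '%s-%d' % (counter_name, random.randint(0, NUM_SHARDS - 1))
        shard = CounterShard.get_by_key_name(shard_name)
        if shard is None:
            shard = CounterShard(key_name=shard_name)
        shard.count += 1
        shard.put()
    db.run_in_transaction(txn)

def get_count(counter_name):
    # Sum all shards belonging to this counter.
    keys = [db.Key.from_path('CounterShard', '%s-%d' % (counter_name, i))
            for i in range(NUM_SHARDS)]
    return sum(s.count for s in db.get(keys) if s is not None)

Call increment() wherever the entities are created, and read get_count() instead of running the query at request time.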
If you have queries that might be dynamic, this is more difficult. If you know the range of possible queries that you might perform, you'd want to create a counter for each query that might run.
If the range of possible queries is infinite, you might want to think of aggregating counters or using them in more creative ways.
If you tell us the query you're trying to run, there might be someone who has a better idea.
A quick thought:
Have you tried the Datastore Statistics API? It may provide fast and accurate results if you don't update your entity set very frequently.
http://code.google.com/appengine/docs/python/datastore/stats.html
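For example, something like this should give a per-kind count (the kind name is a placeholder, and note that the statistics are only refreshed periodically, so they lag behind recent writes):

from google.appengine.ext.db import stats

kind_stat = stats.KindStat.all().filter('kind_name =', 'MyEntity').get()
if kind_stat is not None:
    print(kind_stat.count)   # total number of entities of this kind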
[EDIT1.]
I did some math on this; I think the estimation method you proposed here could be rephrased as an "order statistic" problem.
http://en.wikipedia.org/wiki/Order_statistic#The_order_statistics_of_the_uniform_distribution
For example:
If the actual number of entities is 60000, the question is equivalent to: what is the probability that your 1000th [2000th, 3000th, ...] sample falls in the interval [l, u], so that the total number of entities estimated from this sample has an acceptable error relative to 60000?
If the acceptable error is 5%, the interval [l, u] will be [0.015873015873015872, 0.017543859649122806]
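For reference, that interval is just the estimate formula turned around (plain arithmetic, using the numbers above):

OFFSET, N, err = 1000, 60000, 0.05
lower = OFFSET / (N * (1 + err))   # ~0.01587: any smaller sampled value overestimates by more than 5%
upper = OFFSET / (N * (1 - err))   # ~0.01754: any larger sampled value underestimates by more than 5%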
I think the probability won't be very large.
This doesn't directly deal with the calculations aspect of your question, but would using the count attribute of a query object work for you? Or have you tried that out and it's not suitable? As per the docs, it's only slightly faster than retrieving all of the data, but on the plus side it would give you the actual number of results.
http://code.google.com/appengine/docs/python/datastore/queryclass.html#Query_count
Let's say I have two fairly large data sets. The first is called "Base" and contains 200 million tab-delimited rows, and the second is called "MatchSet", which has 10 million tab-delimited rows of similar data.
Let's say I then also have an arbitrary function called Match(row1, row2) and Match() essentially contains some heuristics for looking at row1 (from MatchSet) and comparing it to row2 (from Base) and determining if they are similar in some way.
Let's say the rules implemented in Match() are custom and complex rules, aka not a simple string match, involving some proprietary methods. Let's say for now Match(row1, row2) is written in pseudo-code, so implementation in another language is not a problem (though it's in C++ today).
In a linear model, aka a program running on one giant processor, we would read each line from MatchSet and each line from Base, compare one to the other using Match(), and write out our match stats. For example we might capture: X records from MatchSet are strong matches, Y records from MatchSet are weak matches, Z records from MatchSet do not match. We would also write the strong/weak/no-match values to separate files for inspection. Aka, a nested loop of sorts:
for each row1 in MatchSet
{
for each row2 in Base
{
var type = Match(row1,row2);
switch(type)
{
//do something based on type
}
}
}
I've started considering Hadoop streaming as a method for running these comparisons as a batch job in a short amount of time. However, I'm having a bit of a hard time getting my head around the map-reduce paradigm for this type of problem.
I understand pretty clearly at this point how to take a single input from hadoop, crunch the data using a mapping function and then emit the results to reduce. However, the "nested-loop" approach of comparing two sets of records is messing with me a bit.
The closest I'm coming to a solution is that I would basically still have to do a 10 million record compare in parallel across the 200 million records, so 200 million/n nodes * 10 million iterations per node. Is that the most efficient way to do this?
From your description, it seems to me that your problem can be arbitrarily complex and could be a victim of the curse of dimensionality.
Imagine for example that your rows represent n-dimensional vectors, and that your matching function is "strong", "weak" or "no match" based on the Euclidean distance between a Base vector and a MatchSet vector. There are great techniques to solve these problems with a trade-off between speed, memory and the quality of the approximate answers. Critically, these techniques typically come with known bounds on time and space, and the probability to find a point within some distance around a given MatchSet prototype, all depending on some parameters of the algorithm.
Rather than ramble about it here, please consider reading the following:
Locality Sensitive Hashing
The first few hits on Google Scholar when you search for "locality sensitive hashing map reduce". In particular, I remember reading [Das, Abhinandan S., et al. "Google news personalization: scalable online collaborative filtering." Proceedings of the 16th international conference on World Wide Web. ACM, 2007] with interest.
Now, on the other hand, if you can devise a scheme that is directly amenable to some form of hashing, then you can easily produce a key for each record with such a hash (or even a small number of possible hash keys, one of which would match the query "Base" data), and the problem becomes a simple large(-ish) scale join. (I say "largish" because joining 200M rows with 10M rows is quite small if the problem is indeed a join.) As an example, consider the way CDDB computes the 32-bit ID for any music CD (the CDDB1 calculation). Sometimes a given title may yield slightly different IDs (i.e. different CDs of the same title, or even the same CD read several times), but by and large there is a small set of distinct IDs for that title. At the cost of a small replication of the MatchSet, in that case you can get very fast search results.
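To make the hashing idea a bit more concrete, here is a toy Python sketch of my own (it assumes the rows can be represented as numeric vectors, which may or may not hold for your data): rows that share a signature become join candidates, and only those candidate pairs need the expensive Match() comparison.

import numpy as np

DIM, N_BITS = 64, 16                        # vector size and signature length (arbitrary)
rng = np.random.default_rng(42)
hyperplanes = rng.standard_normal((N_BITS, DIM))

def signature(vec):
    # 16-bit signature: which side of each random hyperplane the vector falls on.
    return tuple(int(b) for b in (hyperplanes @ vec) > 0)

# Similar vectors tend to share signatures, so the signature can serve as the
# join key emitted for both Base and MatchSet records.
v1 = rng.standard_normal(DIM)
v2 = v1 + 0.01 * rng.standard_normal(DIM)   # a near-duplicate of v1
print(signature(v1) == signature(v2))       # usually True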
Check Section 3.5, Relational Joins, in 'Data-Intensive Text Processing with MapReduce'. I haven't gone into it in detail, but it might help you.
This is an old question, but your proposed solution is correct assuming that your single stream job does 200M * 10M Match() computations. By doing N batches of (200M / N) * 10M computations, you've achieved a factor of N speedup. By doing the computations in the map phase and then thresholding and steering the results to Strong/Weak/No Match reducers, you can gather the results for output to separate files.
If additional optimizations could be utilized, they'd likely apply to both the single-stream and parallel versions. Examples include blocking so that you need to do fewer than 200M * 10M computations, or precomputing constant portions of the algorithm for the 10M match set.
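A skeletal Hadoop streaming mapper along those lines might look as follows. This is only a sketch: it assumes Base is the job input (so each mapper sees one slice of it), MatchSet is shipped to every node as matchset.tsv (e.g. via the distributed cache), the first tab-delimited field of each row is an identifier, and match() stands in for the real Match() heuristics.

#!/usr/bin/env python
import sys

def match(row1, row2):
    # Placeholder for the proprietary heuristics; returns "strong", "weak" or "none".
    return "strong" if row1 == row2 else "none"

# Load the full MatchSet once per mapper.
with open("matchset.tsv") as f:
    matchset = [line.rstrip("\n") for line in f]

for base_line in sys.stdin:
    row2 = base_line.rstrip("\n")
    for row1 in matchset:
        result = match(row1, row2)
        if result != "none":
            # Key on the match type so the reducers can steer strong and weak
            # matches into separate output files.
            print("%s\t%s\t%s" % (result, row1.split("\t")[0], row2.split("\t")[0]))

With N mappers this is exactly the (200M / N) * 10M split described above.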
I have a query where I need to modify the selected data and I want to limit my results of that data. For instance:
SELECT table_id, radians( 25 ) AS rad FROM test_table WHERE rad < 5 ORDER BY rad ASC;
Where this gets hung up is the 'rad < 5', because according to CodeIgniter there is no 'rad' column. I've tried writing this as a custom query ($this->db->query(...)) but even that won't let me. I need to restrict my results based on this field. Oh, and the ORDER BY works perfectly if I remove the WHERE filter; the results are ordered ASC by the rad field.
HELP!!!
With many DBMSes, we need to repeat the formula / expression in the where clause, i.e.
SELECT table_id, radians( 25 ) AS rad
FROM test_table
WHERE radians( 25 ) < 5
ORDER BY radians( 25 ) ASC
However, in this case, since the calculated column is a constant, the query itself doesn't make much sense. Was there maybe a missing part, as in say radians(25 * myColumn) or something like that?
Edit (following info about true nature of formula etc.)
You seem disappointed because the formula needs to be repeated... A few comments on that:
The fact that the formula needs to be explicitly spelled out rather than aliased may make the query less readable, less fun to write etc. (more on this below), but the more important factor to consider is that using the formula in the WHERE clause causes the DBMS to calculate this value for potentially all of the records in the underlying table!
This in turn hurts performance in several ways:
SQL may not be able to use some indexes, and instead has to scan the table (or parts thereof)
if the formula is heavy, it makes for both a slow response and a less scalable server
The situation is not quite as bad if additional predicates in the WHERE clause allow SQL to filter out [a significant number of] records that would otherwise be processed. Such additional search criteria may be driven by the application (for example, in addition to this condition on the radian value, the [unrelated] altitude of the location is required to be below 6,000 ft), or such criteria may be added "artificially" to help with the query (for example, you may know of a rough heuristic which is insufficient to calculate the "radian" value within acceptable precision, but may yet be good enough to filter out 70% of the records, only keeping those which have a chance of satisfying the exact range desired for the "radian").
Now a few tricks regarding the formula itself, in an attempt to make it faster:
remember that you may not need to run 100% of the textbook formula.
I'm not sure which part of the great circle math is relevant to this radian calculation, but speaking in generic terms, some formulas include an expensive step, such as a square root extraction, a call to a trig function etc. In some cases it may be possible to simplify the formula (which has to be run for many records/values) by applying the reverse step to the other side of the predicate (which, typically, only needs to be evaluated once). For example, say the search condition predicate is "WHERE SQRT((x1-x2)^2 + (y1-y2)^2) > 5". Since the calculation of distance involves finding the square root (of the sum of the squared differences), one may decide to remove the square root and instead compare the result of the modified formula with the square of the original distance value, i.e. "WHERE ((x1-x2)^2 + (y1-y2)^2) > (5^2)".
Depending on your SQL/DBMS system, it may be possible to implement the formula in a custom-defined function, which would make it both more efficient (because "pre-compiled", maybe written in a better language etc.) and shorter to reference in the SQL query itself (even though it would still need to be listed twice, as said).
Depending on the situation, it may also be possible to alter the database schema and underlying application to have the formula (or parts thereof) pre-computed and indexed, saving the DBMS this lengthy resolution of the function-based predicate.