MongoDB Performance on MongoLab for Geospatial Queries

I have a collection of places with more than 400,000 documents. I am trying to do geospatial queries, but they always seem to time out.
From the MongoLab interface I do a search:
{ "location": {"$near": [ 38, -122 ] } }
And the page just times out.
I also ran this command through my console:
db.runCommand({geoNear: "places", near: [50,50], num:10})
And it did succeed but took something like 5 minutes to complete.
I do have a geospatial index on location:
{ "location" : "2d" }
Is it just impossible to do geospatial queries on such big collections (which are actually quite small by MongoDB standards)?
EDIT: MongoLab personally contacted me regarding this problem. It seems there are some issues with my db, such as many places not having any coords yet. I also discovered that using maxDistance accelerates the queries dramatically, which brings me back to this morning's question here: so question

MongoLab's techs have pointed out to me that having a lot of longitude/latitude pairs set to 0,0 and NOT using a maxDistance was what was slowing things down. Adding the maxDistance worked like a charm.
So thanks again to the guys at MongoLab.
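For reference, the bounded query looks something like this (a minimal pymongo sketch; the connection string, database name, and the 0.5 cap are placeholders, not MongoLab-specific values):
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
places = client["mydb"]["places"]                  # placeholder database name

# Bounded $near query; with a legacy 2d index, $maxDistance is expressed in the
# same units as the stored coordinates, and the cap is what keeps the scan small.
cursor = places.find(
    {"location": {"$near": [38, -122], "$maxDistance": 0.5}}
).limit(10)

for doc in cursor:
    print(doc["_id"], doc.get("location"))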

Related

Get all open searches (points in time) in Elasticsearch

To create a new point in time we do
POST /my-index-000001/_pit?keep_alive=1m
output:
{
  "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIgAAAAA=="
}
It will return a PIT ID.
Now, to get the total number of open search contexts, we do
GET /_nodes/stats/indices/search
and output has
"open_contexts" : 18,
Elastic says that we should close/delete the PIT after the work is done: Close the point in time.
Now the question:
How can we get the IDs of all open searches or PITs?
If we cannot get the IDs, is there any way to close the search after some time, or is it auto-closed once the keep_alive time has elapsed?
I am not able to find any documentation regarding this.
There's no way to retrieve the IDs of the open search contexts; you're supposed to keep track of them in your application code.
As the link you provided states:
Point-in-time is automatically closed when its keep_alive has been elapsed.
So in your case, if you don't issue a new query after 1 minute, then the search context will be automatically closed.
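If you do track them yourself, you can also close them explicitly with the close PIT API instead of waiting for keep_alive to expire. A minimal sketch, assuming the requests library and a local cluster on port 9200 (the index name and keep_alive value are taken from the question):
import requests

ES = "http://localhost:9200"

# Open a PIT and remember its id in the application; there is no API to list them later.
pit_id = requests.post(ES + "/my-index-000001/_pit", params={"keep_alive": "1m"}).json()["id"]
open_pits = [pit_id]  # application-side bookkeeping

# ... run the PIT-based searches here ...

# Close each PIT explicitly once the work is done.
for pid in open_pits:
    requests.delete(ES + "/_pit", json={"id": pid})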

Elasticsearch giving cached result even after 5-6 seconds

My system is calling Elasticsearch. After updating a document I would like to fetch the same document again. While doing so, Elasticsearch sometimes returns stale results (results from before the update), even when I retry the get 5-6 seconds later.
I have used refresh: 'wait_for' while updating the document. Can anyone suggest a workaround for this? I would like to fetch the latest revision of the updated document. My query to fetch it is:
body: {
  query: {
    terms: {
      _id: [
        idsToFetch
      ]
    }
  }
}
First, check what refresh interval is set for your index (it defaults to 1 second). In that case refresh: wait_for should return within 1 second at most, but as explained in the official ES documents:
If the refresh interval is set to -1, disabling the automatic
refreshes, then requests with refresh=wait_for will wait indefinitely
until some action causes a refresh. Conversely, setting
index.refresh_interval to something shorter than the default like
200ms will make refresh=wait_for come back faster, but it’ll still
generate inefficient segments.
You can check what refresh_interval is set for your index using the get index settings API (https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html); note that it only appears in the result if it has been changed from its default value.
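For example, a minimal sketch with the requests library against a local cluster (the index name, document id, and updated field are placeholders; the _update endpoint shown is the 7.x-style typeless one):
import requests

ES = "http://localhost:9200"
index, doc_id = "my-index", "1"  # placeholders

# refresh_interval only shows up here if it was explicitly changed from the default (1s).
settings = requests.get(ES + "/" + index + "/_settings").json()
print(settings[index]["settings"]["index"].get("refresh_interval", "default (1s)"))

# Update and block until a refresh has made the new version searchable.
requests.post(
    ES + "/" + index + "/_update/" + doc_id,
    params={"refresh": "wait_for"},
    json={"doc": {"status": "updated"}},  # placeholder partial update
)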
Let me know if you face any issues or have more questions.

Error 429 [type=reduce_search_phase_exception]

I have docs in many languages and am following this pattern: one index per language. There they suggest searching across all indices with the
/blogs-*/post/_count
pattern. In my case I am getting a count across the indices of how many docs I have. I am running my code concurrently, so I am making many requests at the same time. If I search
/blogs-en/post/_count
or any other single language, then all is fine. However, if I search
/blogs-*/post/_count
I soon encounter:
"Error 429 (Too Many Requests): [reduce] [type=reduce_search_phase_exception]
"
Is there a workaround for this? The same number of requests is made regardless of whether I use
/blogs-en/post/_count or /blogs-*/post/_count.
I have always used the same number of workers in my code but re-arranging the indices to have one index per language suddenly broke my code.
EDIT: It is a brand new index without any documents when I start the program, and by the time I get the error I have about 5,000 documents, so it is not under any heavy load.
Edit: I am using the mapping found in the above-referenced link and running on a local machine with all the defaults of ES...in my case shards=5 and replicas=1. I am really just following the example from the link.
EDIT: The errors are seen when as few as 13-20 requests are made, and I know ES can handle more than that. Searching /blogs-en/post/_count instead of /blogs-*/post/_count, etc., can easily handle thousands with no errors.
Another edit: I have removed all concurrency but still can only make 40-50 requests before I get the error.
I don't get an error for that request, and it returns the total document count.
Is your cluster under load?
Anyway, using a simple aggregation you can get the total document count in hits.total and the per-index document count in the count_per_index part of the result:
GET /blogs-*/post/_search
{
  "size": 0,
  "query": {
    "match_all": {}
  },
  "aggs": {
    "count_per_index": {
      "terms": {
        "field": "_index"
      }
    }
  }
}
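A minimal sketch of running that single request and reading both counts, assuming the requests library and a local cluster (note that on newer ES versions hits.total is an object rather than a plain number):
import requests

ES = "http://localhost:9200"
body = {
    "size": 0,
    "query": {"match_all": {}},
    "aggs": {"count_per_index": {"terms": {"field": "_index"}}},
}

resp = requests.get(ES + "/blogs-*/post/_search", json=body).json()
print("total docs:", resp["hits"]["total"])
for bucket in resp["aggregations"]["count_per_index"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])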

Solr performance with commitWithin does not make sense

I am running a very simple performance experiment where I post 2000 documents to my application, which in turn persists them to a relational DB and sends them to Solr for indexing (synchronously, in the same request).
I am testing 3 use cases:
No indexing at all - ~45 sec to post 2000 documents
Indexing included - commit after each add. ~8 minutes (!) to post and index 2000 documents
Indexing included - commitWithin 1ms ~55 seconds (!) to post and index 2000 documents
The 3rd result does not make any sense; I would expect the behavior to be similar to the one in point 2. At first I thought that the documents were not really committed, but I could actually see them being added by executing some queries during the experiment (via the Solr web UI).
I am worried that I am missing something very big. Is it possible that committing after each add degrades performance by a factor of 400?!
The code I use for point 2:
SolrInputDocument doc = // get doc
SolrServer solrConnection = // get connection
solrConnection.add(doc);
solrConnection.commit();
Where as the code for point 3:
SolrInputDocument doc = // get doc
SolrServer solrConnection = // get connection
solrConnection.add(doc, 1); // According to API documentation I understand there is no need to call an explicit commit after this
According to this wiki:
https://wiki.apache.org/solr/NearRealtimeSearch
the commitWithin is a soft commit by default. Soft commits are very efficient in terms of making the added documents immediately searchable. But! They are not on the disk yet. That means the documents are being committed into RAM. In this setup you would use updateLog to make the Solr instance crash-tolerant.
What you do in point 2 is a hard commit, i.e. flushing the added documents to disk. Doing this after each document add is very expensive. So instead, post a bunch of documents and issue a single hard commit, or have autoCommit set to some reasonable value, like 10 minutes or 1 hour (depending on your users' expectations).
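A minimal sketch of the batched variant, assuming the requests library, a local Solr on port 8983, and a hypothetical core name (the JSON update handler accepts a plain array of documents):
import requests

SOLR = "http://localhost:8983/solr/mycore"  # placeholder core name
docs = [{"id": str(i), "title_s": "doc %d" % i} for i in range(2000)]  # placeholder docs

# Post the whole batch in one request and issue a single hard commit for it,
# instead of one commit per add.
requests.post(
    SOLR + "/update",
    params={"commit": "true"},
    json=docs,
    headers={"Content-Type": "application/json"},
)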

can't reproduce/verify the performance claims in graph databases and neo4j in action books

UPDATE: I have put up a follow-up question that contains updated scripts and a clearer setup on neo4j performance compared to mysql (how can it be improved?). Please continue there. /UPDATE
I have some problems verifying the performance claims made in the Graph Databases book (page 20) and in the Neo4j in Action book (chapter 1).
To verify these claims I created a sample dataset of 100,000 'person' entries with 50 'friends' each, and tried to query for e.g. friends 4 hops away. I used the very same dataset in mysql. For friends of friends over 4 hops, mysql returns in 0.93 secs, while neo4j needs 65-75 secs (on repeated calls).
How can I improve this miserable outcome, and verify the claims made in the books?
A bit more detail:
I run the whole setup on an i5-3570K with 16GB RAM, using Ubuntu 12.04 64-bit, Java "1.7.0_25", MySQL 5.5.31, and neo4j-community-2.0.0-M03 (I get a similar outcome with 1.9).
All code/sample data can be found on https://github.com/jhb/neo4j-experiements/ (to be used with 2.0.0). The resulting sample data in different formats can be found on https://github.com/jhb/neo4j-testdata.
To use the scripts you need a python with mysql-python, requests and simplejson installed.
the dataset is created with friendsdata.py and stored to friends.pickle
friends.pickle gets imported to neo4j using import_friends_neo4j.py
friends.pickle gets imported to mysql using import_friends_mysql.py
I add indexes on t_user_friend.* in mysql
I added "create index on :node(noscenda_name) in neo4j
To make life a bit easier the friends.*.bz2 contain sql and cypher statements to create those datasets in mysql and neo4j 2.0 M3.
Mysql performance
I first warm mysql up by querying:
select count(distinct name) from t_user;
select count(distinct name) from t_user;
Then, for the real measurement I do
python query_friends_mysql.py 4 10
This creates the following SQL statement (with varying t_user.name values):
select
count(*)
from
t_user,
t_user_friend as uf1,
t_user_friend as uf2,
t_user_friend as uf3,
t_user_friend as uf4
where
t_user.name='person8601' and
t_user.id = uf1.user_1 and
uf1.user_2 = uf2.user_1 and
uf2.user_2 = uf3.user_1 and
uf3.user_2 = uf4.user_1;
and repeats this 4-hop query 10 times. The queries need around 0.95 secs each. MySQL is configured to use a key_buffer of 4G.
neo4j performance testing
I have modified neo4j.properties:
neostore.nodestore.db.mapped_memory=25M
neostore.relationshipstore.db.mapped_memory=250M
and the neo4j-wrapper.conf:
wrapper.java.initmemory=2048
wrapper.java.maxmemory=8192
To warm up neo4j I do
start n=node(*) return count(n.noscenda_name);
start r=relationship(*) return count(r);
Then I start using the transactional http endpoint (but I get the same results using the neo4j-shell).
Still warming up, I run
./bin/python query_friends_neo4j.py 3 10
This creates a query of the form (with varying person ids):
{"statement": "match n:node-[r*3..3]->m:node where n.noscenda_name={target} return count(r);", "parameters": {"target": "person3089"}
after the 7th call or so each call needs around 0.7-0.8 secs.
Now for the real thing (4 hops) I do
./bin/python query_friends_neo4j.py 4 10
creating
{"statement": "match n:node-[r*4..4]->m:node where n.noscenda_name={target} return count(r);", "parameters": {"target": "person3089"}
and each call takes between 65 and 75 secs.
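For reference, a minimal sketch of how such a statement can be posted to the 2.0 transactional endpoint, assuming the requests library and a local server on port 7474 (query_friends_neo4j.py does something along these lines):
import requests

payload = {
    "statements": [{
        "statement": "match n:node-[r*4..4]->m:node where n.noscenda_name={target} return count(r);",
        "parameters": {"target": "person3089"},
    }]
}

# Single-request transaction: the statement is executed and committed in one round trip.
resp = requests.post(
    "http://localhost:7474/db/data/transaction/commit",
    json=payload,
    headers={"Accept": "application/json"},
)
print(resp.json())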
Open questions/thoughts
I'd really like to see the claims in the books be reproducible and correct, and neo4j faster than mysql instead of orders of magnitude slower.
But I don't know what I am doing wrong... :-(
So, my big hopes are:
I didn't do the memory settings for neo4j correctly
The query I use for neo4j is completely wrong
Any suggestions to get neo4j up to speed are highly welcome.
Thanks a lot,
Joerg
2.0 has not been performance optimized at all, so you should use 1.9.2 for comparison.
(If you use 2.0 - did you create an index for n.noscenda_name?)
You can check the query plan with profile start ....
With 1.9 please use a manual index or node_auto_index for noscenda_name.
Can you try these queries:
start n=node:node_auto_index(noscenda_name={target})
match n-->()-->()-->m
return count(*);
Fulltext indexes are also more expensive than exact indexes, so keep the exact auto-index for noscenda_name.
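A minimal sketch of running that suggested query with a parameter over the 1.9 REST cypher endpoint, assuming the requests library and a local server on port 7474:
import requests

payload = {
    "query": ("start n=node:node_auto_index(noscenda_name={target}) "
              "match n-->()-->()-->m return count(*);"),
    "params": {"target": "person3089"},
}

resp = requests.post(
    "http://localhost:7474/db/data/cypher",
    json=payload,
    headers={"Accept": "application/json"},
)
print(resp.json())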
I can't get your importer to run; it fails at some point. Perhaps you can share the finished neo4j database:
python importer.py
reading rels
reading nodes
delete old
Traceback (most recent call last):
File "importer.py", line 9, in <module>
g.query('match n-[r]->m delete r;')
File "/Users/mh/java/neo/neo4j-experiements/neo4jconnector.py", line 99, in query
return self.call(payload)
File "/Users/mh/java/neo/neo4j-experiements/neo4jconnector.py", line 71, in call
self.transactionurl = result.headers['location']
File "/Library/Python/2.7/site-packages/requests-1.2.3-py2.7.egg/requests/structures.py", line 77, in __getitem__
return self._store[key.lower()][1]
KeyError: 'location'
Just to add to what Michael said, in the book I believe the authors are referring to a comparison that was done in the Neo4j in Action book - it's described in the free first chapter of that book.
At the top of page 7 they explain that they were using the Traversal API rather than Cypher.
I think you'll struggle to get Cypher near that level of performance at the moment, so if you want to do those types of queries you'll want to use the Traversal API directly, and then perhaps wrap it in an unmanaged extension.
