I have a medium-sized Neo4j database with about 700,000 nodes and 1-5 outgoing relationships per node.
If I use the browser interface to query nodes on an indexed attribute and find adjacent nodes, it takes about 1500 ms, which is fine for me.
MATCH (n {id_str : 'some_id'})-->(child) return child.id_str
...
Returned 2 rows in 1655 ms
But if I run a similar Cypher query mentioning relationships using the Ruby Neography library, it takes a couple of minutes to complete.
lookup_links = "MATCH (n {id_str : {id_str}})-[:internal_link]->(child) return child.id_str"
links = @neo.execute_query(lookup_links, :id_str => id_str)
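One way to see whether the slowdown sits in Neography/MRI or in the server itself is to send the same parameterized query over plain HTTP and time it. Below is a rough sketch using Python and the requests library against the Neo4j 2.x transactional endpoint; the host, port and id value are assumptions.
import json
import time
import requests

# Same parameterized query as the Neography call above; 'some_id' is a placeholder.
payload = {"statements": [{
    "statement": "MATCH (n {id_str : {id_str}})-[:internal_link]->(child) "
                 "RETURN child.id_str",
    "parameters": {"id_str": "some_id"},
}]}

start = time.time()
resp = requests.post("http://localhost:7474/db/data/transaction/commit",
                     data=json.dumps(payload),
                     headers={"Content-Type": "application/json",
                              "Accept": "application/json",
                              "X-Stream": "true"})
print("took %.0f ms" % ((time.time() - start) * 1000))
print(resp.json())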
And after that regular browser queries become extremely slow too, taking about two minutes each.
MATCH (n2 {id_str : 'some_id'})-->(child) return child.id_str
Returned 2 rows in 116201 ms
I run the experiments on a 64-bit Ubuntu 14.04 laptop with 8 GB RAM and a 1 GB heap for Neo4j. Neo4j is version 2.1.3, installed from the official deb package; Neography is 1.6.0; the Ruby interpreter is MRI 1.9.3.
I've taken a thread dump using kill -3 while Neo4j is busy serving the query:
https://gist.github.com/akamaus/a06bc9e04c7209c480e9
Any ideas what's going wrong and how to investigate it?
I have a large database (100M rows) indexed by SphinxSearch. Each search takes 0.1-0.5s. However, if I run 10 searches concurrently, they take 20s on average.
Is it the expected behaviour of SphinxSearch?
Should I adjust the config or move to another search engine for concurrency?
My config file is simple:
searchd
{
listen = 9312
listen = 9306:mysql41
pid_file = /var/searchd.pid
read_timeout = 30
log = /var/log/sphinxsearch/searchd.log
query_log = /var/log/sphinxsearch/query.log
}
Is it the expected behaviour of SphinxSearch?
It depends heavily on the number of CPUs. If you have more than 10 physical CPUs, then latency degrading from 0.5 sec to 20 sec when you increase concurrency from 1 to 10 is definitely not expected. In that case, first of all make sure all your CPUs are busy under the concurrent load. If they are not, then depending on your Sphinx version and multi-tasking mode, let it run with more threads. A rough way to measure this is sketched below.
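The following uses Python with pymysql over the SphinxQL listener from the config above (port 9306); the index name, search term and connection details are made-up placeholders, so treat it as a sketch rather than a drop-in benchmark.
import time
from concurrent.futures import ThreadPoolExecutor

import pymysql  # SphinxQL speaks the MySQL wire protocol (listen = 9306:mysql41)

def one_search(_):
    # Placeholder index name and search term; replace with your real ones.
    conn = pymysql.connect(host="127.0.0.1", port=9306)
    t0 = time.time()
    with conn.cursor() as cur:
        cur.execute("SELECT id FROM myindex WHERE MATCH('test query') LIMIT 10")
        cur.fetchall()
    conn.close()
    return time.time() - t0

for workers in (1, 5, 10):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(one_search, range(workers)))
    print("concurrency %2d: avg %.2fs  max %.2fs"
          % (workers, sum(latencies) / len(latencies), max(latencies)))
If latency climbs with concurrency while the CPUs stay mostly idle, that points at the threading/worker settings for your Sphinx version rather than raw engine speed, as noted above.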
Should I adjust the config or move to another search engine for concurrency?
I recommend Manticore Search as:
it's open source - https://github.com/manticoresoftware/manticoresearch/
it's the only fork of Sphinx, and if you are familiar with Sphinx it shouldn't be a problem to migrate
hundreds of bugs have been fixed
the multi-tasking mode is completely different (coroutines)
I am using Grafana 5.1.3 (commit: 087143285) and InfluxDB 1.5.2, along with JMeter.
There are 13 panels. Each panel is taking 5 to 8 seconds to load.
The query below is what runs for a panel (when I run the same query directly on the DB server, it runs very fast):
SELECT mean("startedThreads") FROM "virtualUsers" WHERE time >= 1537865329564ms and time <= 1537867129564ms GROUP BY time(60s) fill(null);
EXPLAIN ANALYZE
execution_time: 157.341µs
planning_time: 626.44µs
total_time: 783.781µs
SELECT count("responseTime")/60 FROM "requestsRaw" WHERE time >= 1537865329564ms and time <= 1537867129564ms GROUP BY time(60s) fill(null)
execution_time: 535.011µs
planning_time: 1.805892ms
total_time: 2.340903ms
Below are the memory and CPU details. InfluxDB and Grafana are hosted on the same server.
free -g
total used free shared buff/cache available
Mem: 15 3 11 0 1 12
Swap: 7 0 6
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
As per my initial understanding, Grafana's minimum memory requirement is 249 MB, so memory should not be a problem for Grafana.
Please let me know if you need more details.
It is odd that the query runs fast while Grafana needs a long time. Panels should be displayed as soon as Grafana gets a response.
Since rendering is done in the browser, AFAIK this could be the bottleneck. So if your browser runs on something like a Raspberry Pi 1, please try a different computer.
It is not clear whether all panels take a long time to load or just one. You should try to find out whether the loading time is tied to a single panel.
Lastly, consider that all queries are sent at the same time, so making just one query to the server from the CLI may not be representative. You could try splitting the dashboard into multiple dashboards to improve the loading time; a rough way to test the effect of concurrent queries is sketched below.
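If you want to see whether 13 queries arriving at once (rather than any single query) explains the 5-8 seconds, you can fire the panel queries in parallel straight at InfluxDB and time them. A sketch with the influxdb Python client; the database name is assumed, and the query list should be replaced with the real panel queries from Grafana's query inspector.
import time
from concurrent.futures import ThreadPoolExecutor

from influxdb import InfluxDBClient  # pip install influxdb

# Stand-in for the 13 panel queries; paste the real ones here.
queries = [
    'SELECT mean("startedThreads") FROM "virtualUsers" '
    'WHERE time >= 1537865329564ms AND time <= 1537867129564ms '
    'GROUP BY time(60s) fill(null)',
] * 13

def timed(q):
    # One client per thread to keep the sketch simple and thread-safe.
    client = InfluxDBClient(host="localhost", port=8086, database="jmeter")  # db name assumed
    t0 = time.time()
    client.query(q)
    return time.time() - t0

with ThreadPoolExecutor(max_workers=len(queries)) as pool:
    latencies = list(pool.map(timed, queries))

print("fastest %.2fs  slowest %.2fs" % (min(latencies), max(latencies)))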
I have a Neo4j server running inside a virtual machine using Ubuntu 13.10, and I am accessing it via REST using Cypher queries. The virtual machine has 4 GB of memory allocated to it.
I've changed the open file count to 40000, set the initial JVM heap to 1 GB, and my neo4j.properties file is as follows:
neostore.nodestore.db.mapped_memory=250M
neostore.relationshipstore.db.mapped_memory=100M
neostore.propertystore.db.mapped_memory=100M
neostore.propertystore.db.strings.mapped_memory=100M
neostore.propertystore.db.arrays.mapped_memory=100M
keep_logical_logs=3 days
node_auto_indexing=true
node_keys_indexable=id
I've also updated sysctl based on the Neo4j Linux tuning guide:
vm.dirty_background_ratio = 50
vm.dirty_ratio = 80
Since I am testing queries, the basic routine is to run my suite of tests and then delete all of the nodes and run them all again. At the start of each test run, the database has 0 nodes in it. My suite of tests of about 100 queries is taking 22 seconds to run. Basic parameterized creates such as:
CREATE (x:user { email: {param0},
name: {param1},
displayname: {param2},
id: {param3},
href: {param4},
object: {param5} })
CREATE x-[:LOGIN]->(:login { password: {param6},
salt: {param7} } )
are currently taking over 170ms to execute (and that's the average, first query time is 700ms). During a test run, the CPU in the VM never exceeds 50% and memory usage is at a steady 1.4Gb.
Why would creating a single node in an empty database take 170ms? At this point unit testing is becoming almost impossible since it is so slow. This is my first time trying to tune Neo4j so I'm not really sure how to figure out where the problem is or what changes should be made.
Additional Details
I'm using Go 1.2 to make REST calls to the cypher endpoint (http://localhost:7474/db/data/cypher) of a locally installed Neo4j instance. I'm setting the request headers for content-type to "application/json", accept to "application/json" and "X-Stream" to true. I always return either an array of maps or nothing depending on the query.
It seems like the creates are the problem and are taking forever. For example:
2014/01/15 11:35:51 NewUser took 123.314938ms
2014/01/15 11:35:51 NewUser took 156.101784ms
2014/01/15 11:35:52 NewUser took 167.439442ms
2014/01/15 11:35:52 ValidatePassword took 4.287416ms
NewUser creates two new nodes and one relationship and is taking 167ms, while ValidatePassword is a read-only operation and it completes in 4ms. Also note that the three calls to NewUser are identical parameterized queries. While the creates are the big problem, I'm also a little concerned that Neo4j is taking 4ms to just find a labeled node when there are only 100 nodes in the database.
I do not restart the server in between test runs or delete the database. I issue a single delete all nodes query MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n,r at the end of the test run. Running the same test suite multiple times back to back does not improve the query times.
Are your 100 queries all the same only with different parameters, or actually 100 different queries?
What you see is actually setup work. The parser has to load its parsing rules initially, which takes a few ms. Also, new queries that have not been seen before are compiled, planned, and put into the query cache.
So the first query always takes a bit longer. But as you parameterize, all subsequent ones should be fast.
Can you confirm that?
I think you see the transactional overhead of flushing the transaction to disk.
Did you try to batch more requests into one, i.e. with the transactional endpoint? Or the /db/data/batch endpoint (but I'd rather use the new tx-endpoint /db/data/transaction)? A batching sketch follows below.
Did you create an index for your lookup property for your validate query?
Can you do me a favor and test your create query without a label? I found some perf issues when testing that myself earlier this week.
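For reference, a rough sketch of the batching idea mentioned above, using Python and requests rather than Go; the statement and parameter values are simplified placeholders, not the exact queries from the question.
import json
import requests

# Simplified version of the create from the question, with placeholder parameters.
create_user = (
    "CREATE (x:user {email: {param0}, name: {param1}, id: {param2}}) "
    "CREATE (x)-[:LOGIN]->(:login {password: {param3}, salt: {param4}})"
)

# Bundle many creates into one transaction instead of one HTTP round-trip
# (and one transaction flush) per user.
statements = [
    {"statement": create_user,
     "parameters": {"param0": "user%d@example.com" % i, "param1": "User %d" % i,
                    "param2": i, "param3": "hashed-password", "param4": "salt"}}
    for i in range(100)
]

resp = requests.post(
    "http://localhost:7474/db/data/transaction/commit",
    data=json.dumps({"statements": statements}),
    headers={"Content-Type": "application/json",
             "Accept": "application/json",
             "X-Stream": "true"},
)
resp.raise_for_status()
print(resp.json()["errors"])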
Just ran a test with curl
for i in `seq 1 10`; do time curl -i -H content-type:application/json -H accept:application/json -H X-Stream:true -d @perf_test.json http://localhost:7474/db/data/cypher; done
I'm getting between 16 and 30 ms per request externally, including starting curl:
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8; stream=true
Access-Control-Allow-Origin: *
Transfer-Encoding: chunked
Server: Jetty(9.0.5.v20130815)
{"columns":[],"data":[]}
real 0m0.016s
user 0m0.005s
sys 0m0.005s
Perhaps it is rather the VM (disk or network) or the cross-vm communication?
Did another test with ab and 1000 requests for both endpoints, got a mean of about 5 ms both times.
https://gist.github.com/jexp/8452037
Is there a way to find queries in MongoDB that are not using indexes or are slow? In MySQL that is possible with the following settings inside the configuration file:
log-queries-not-using-indexes = 1
log_slow_queries = /tmp/slowmysql.log
The equivalent approach in MongoDB would be to use the query profiler to track and diagnose slow queries.
With profiling enabled for a database, slow operations are written to the system.profile capped collection (which by default is 1 MB in size). You can adjust the threshold for slow operations (by default 100 ms) using the slowms parameter.
First, you must set up profiling, specifying the profiling level you want. The three options are:
0 - profiler off
1 - log slow queries
2 - log all queries
You do this by running your mongod daemon with the --profile option:
mongod --profile 2 --slowms 20
With this, the logs will be written to the system.profile collection, on which you can perform queries as follows:
To find all log entries for some collection, ordered by ascending timestamp:
db.system.profile.find( { ns:/<db>.<collection>/ } ).sort( { ts: 1 } );
To look for queries that took more than 5 milliseconds:
db.system.profile.find( {millis : { $gt : 5 } } ).sort( { ts: 1} );
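The same system.profile queries can also be run from a driver. A small pymongo sketch, with the database and collection names as placeholders:
import re
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["mydb"]  # placeholder database name
profile = db["system.profile"]

# All profile entries for one collection, ordered by ascending timestamp.
for op in profile.find({"ns": re.compile(r"mydb\.mycollection")}).sort("ts", 1):
    print(op["ts"], op["op"], op.get("millis"), "ms")

# Profile entries for operations that took more than 5 ms.
for op in profile.find({"millis": {"$gt": 5}}).sort("ts", 1):
    print(op["ts"], op["ns"], op["millis"], "ms")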
You can use the following two mongod options. The first one fails queries that are not using an index (v2.4 only); the second records queries slower than some ms threshold (the default is 100 ms).
--notablescan
Forbids operations that require a table scan.
--slowms <value>
Defines the value of "slow" for the --profile option. The database logs all slow queries to the log, even when the profiler is not turned on. When the database profiler is on, mongod writes the profiler output to the system.profile collection. See the profile command for more information on the database profiler.
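If restarting mongod with those flags is not convenient, the same settings can be applied per database at runtime through the profile command. A pymongo sketch, with the database name, level and threshold as example values:
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["mydb"]  # placeholder database name

# Roughly equivalent to starting mongod with --profile 1 --slowms 100,
# but applied at runtime to this one database.
db.command("profile", 1, slowms=100)

# Read back the current profiling settings (level -1 means "just report").
print(db.command("profile", -1))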
You can use the command line tool mongotail to read the log from the profiler within a console and with a more readable format.
First activate the profiler and set the threshold in milliseconds above which the profiler considers an operation slow. In the following example the threshold is set to 10 milliseconds for a database named "sales":
$ mongotail sales -l 1
Profiling level set to level 1
$ mongotail sales -s 10
Threshold profiling set to 10 milliseconds
Then, to see the slow queries in "real time", with some extra information like the time each query took or how many records it needed to "walk" to find a particular result:
$ mongotail sales -f -m millis nscanned docsExamined
2016-08-11 15:09:10.930 QUERY [ops] : {"deleted": {"$exists": false}, "prod_id": "367133"}. 8 returned. nscanned: 344502. millis: 12
2016-08-11 15:09:10.981 QUERY [ops] : {"deleted": {"$exists": false}, "prod_id": "367440"}. 6 returned. nscanned: 345444. millis: 12
....
In case somebody ends up here from Google on this old question, I found that explain really helped me fix specific queries that I could see were causing COLLSCANs from the logs.
Example:
db.collection.find().explain()
This will let you know if the query is using a COLLSCAN (Basic Cursor) or an index (BTree), among other things.
https://docs.mongodb.com/manual/reference/method/cursor.explain/
While you can obviously use the profiler, a very neat feature of MongoDB, which actually made me fall in love with it, is MongoDB MMS.
It takes less than 60 seconds to set up and can be managed from anywhere. I am sure you will love it.
https://mms.mongodb.com/
UPDATE: I have put up a follow-up question that contains updated scripts and a clearer setup on Neo4j performance compared to MySQL (how can it be improved?). Please continue there. /UPDATE
I have some problems verifying the performance claims made in the "Graph Databases" book (page 20) and in the Neo4j in Action book (chapter 1).
To verify these claims I created a sample dataset of 100,000 'person' entries with 50 'friends' each, and tried to query for, e.g., friends 4 hops away. I used the very same dataset in MySQL. With friends of friends over 4 hops, MySQL returns in 0.93 secs, while Neo4j needs 65-75 secs (on repeated calls).
How can I improve this miserable outcome, and verify the claims made in the books?
A bit more detail:
I run the whole setup on an i5-3570K with 16 GB RAM, using Ubuntu 12.04 64-bit, Java 1.7.0_25, MySQL 5.5.31, and neo4j-community-2.0.0-M03 (I get a similar outcome with 1.9).
All code/sample data can be found on https://github.com/jhb/neo4j-experiements/ (to be used with 2.0.0). The resulting sample data in different formats can be found on https://github.com/jhb/neo4j-testdata.
To use the scripts you need a python with mysql-python, requests and simplejson installed.
the dataset is created with friendsdata.py and stored to friends.pickle
friends.pickle gets imported to neo4j using import_friends_neo4j.py
friends.pickle gets imported to mysql using import_friends_mysql.py
I add indexes on t_user_friend.* in MySQL
I added "create index on :node(noscenda_name)" in Neo4j
To make life a bit easier, the friends.*.bz2 files contain SQL and Cypher statements to create those datasets in MySQL and Neo4j 2.0 M3.
Mysql performance
I first warm mysql up by querying:
select count(distinct name) from t_user;
select count(distinct name) from t_user;
Then, for the real measurement, I do
python query_friends_mysql.py 4 10
This creates the following sql statement (with changing t_user.names):
select
count(*)
from
t_user,
t_user_friend as uf1,
t_user_friend as uf2,
t_user_friend as uf3,
t_user_friend as uf4
where
t_user.name='person8601' and
t_user.id = uf1.user_1 and
uf1.user_2 = uf2.user_1 and
uf2.user_2 = uf3.user_1 and
uf3.user_2 = uf4.user_1;
and repeats this 4-hop query 10 times. The queries need around 0.95 secs each. MySQL is configured to use a key_buffer of 4 GB.
neo4j performance testing
I have modified neo4j.properties:
neostore.nodestore.db.mapped_memory=25M
neostore.relationshipstore.db.mapped_memory=250M
and the neo4j-wrapper.conf:
wrapper.java.initmemory=2048
wrapper.java.maxmemory=8192
To warm up neo4j I do
start n=node(*) return count(n.noscenda_name);
start r=relationship(*) return count(r);
Then I start using the transactional http endpoint (but I get the same results using the neo4j-shell).
Still warming up, I run
./bin/python query_friends_neo4j.py 3 10
This creates a query of the form (with varying person ids):
{"statement": "match n:node-[r*3..3]->m:node where n.noscenda_name={target} return count(r);", "parameters": {"target": "person3089"}
After the 7th call or so, each call needs around 0.7-0.8 secs.
Now for the real thing (4 hops) I do
./bin/python query_friends_neo4j.py 4 10
creating
{"statement": "match n:node-[r*4..4]->m:node where n.noscenda_name={target} return count(r);", "parameters": {"target": "person3089"}
and each call takes between 65 and 75 secs.
Open questions/thoughts
I'd really like to see the claims in the books be reproducible and correct, and Neo4j faster than MySQL instead of orders of magnitude slower.
But I don't know what I am doing wrong... :-(
So, my big hopes are:
I didn't do the memory settings for neo4j correctly
The query I use for neo4j is completely wrong
Any suggestions to get neo4j up to speed are highly welcome.
Thanks a lot,
Joerg
2.0 has not been performance-optimized at all, so you should use 1.9.2 for comparison.
(If you do use 2.0: did you create an index for n.noscenda_name?)
You can check the query plan by prefixing the query, i.e. profile start ....
With 1.9 please use a manual index or node_auto_index for noscenda_name.
Can you try these queries:
start n=node:node_auto_index(noscenda_name={target})
match n-->()-->()-->m
return count(*);
Fulltext indexes are also more expensive than exact indexes, so keep the exact auto-index for noscenda_name.
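For what it's worth, here is a sketch of how the suggested query could be timed from Python against the legacy cypher endpoint (which is what 1.9 offers); the URL is an assumption and the person id is a placeholder taken from the question's sample data.
import json
import time
import requests

payload = {
    "query": ("start n=node:node_auto_index(noscenda_name={target}) "
              "match n-->()-->()-->m return count(*)"),
    "params": {"target": "person3089"},  # placeholder id from the sample data
}

# Run the query 10 times to see warm-up behaviour, mirroring query_friends_neo4j.py.
for i in range(10):
    t0 = time.time()
    r = requests.post("http://localhost:7474/db/data/cypher",
                      data=json.dumps(payload),
                      headers={"Content-Type": "application/json",
                               "Accept": "application/json"})
    print("run %d: %.2fs -> %s" % (i, time.time() - t0, r.json()["data"]))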
I can't get your importer to run; it fails at some point. Perhaps you can share the finished Neo4j database.
python importer.py
reading rels
reading nodes
delete old
Traceback (most recent call last):
File "importer.py", line 9, in <module>
g.query('match n-[r]->m delete r;')
File "/Users/mh/java/neo/neo4j-experiements/neo4jconnector.py", line 99, in query
return self.call(payload)
File "/Users/mh/java/neo/neo4j-experiements/neo4jconnector.py", line 71, in call
self.transactionurl = result.headers['location']
File "/Library/Python/2.7/site-packages/requests-1.2.3-py2.7.egg/requests/structures.py", line 77, in __getitem__
return self._store[key.lower()][1]
KeyError: 'location'
Just to add to what Michael said, in the book I believe the authors are referring to a comparison that was done in the Neo4j in Action book - it's described in the free first chapter of that book.
At the top of page 7 they explain that they were using the Traversal API rather than Cypher.
I think you'll struggle to get Cypher near that level of performance at the moment so if you want to do those types of queries you'll want to use the Traversal API directly and then perhaps wrap it in an unmanaged extension.