I have a small KB with 25,000 triples and 10 OWL reasoning rules (EquivalentClasses).
After a lot of work and desperation I got my Fuseki to work (OWLMicroFBRuleReasoner).
I get reasoned results in about 5 ms. My big problem is that the first query after Fuseki starts, or after I insert some data, takes about 50 seconds.
What am I doing wrong? Is it because it does a full reasoning pass over the whole KB every time?
How could I speed it up?
Best regards...
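A likely cause: Jena's forward rule engine builds its deduction closure lazily, so the whole KB is re-reasoned on the first query after startup or after an update. A minimal warm-up sketch with the Jena API, assuming you can run embedded code at startup (`kb.ttl` is a placeholder path):

```java
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.ReasonerRegistry;

public class WarmUp {
    public static void main(String[] args) {
        // Load the knowledge base (placeholder path).
        Model data = ModelFactory.createDefaultModel();
        data.read("kb.ttl");

        // OWL Micro reasoner, the same family as OWLMicroFBRuleReasoner.
        Reasoner reasoner = ReasonerRegistry.getOWLMicroReasoner();
        InfModel inf = ModelFactory.createInfModel(reasoner, data);

        // Force the forward-chaining pass now instead of on the first query.
        inf.prepare();
    }
}
```

Against a running Fuseki endpoint, firing any cheap SPARQL ASK query right after startup and after each insert has the same warming effect, so real users never hit the 50-second rebuild.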
Related
I'm querying my database with ES 2.3.1 and I've been measuring the response times, but I got three different times.
First I measured the time of the first query on the database. It takes about 9 seconds.
The second time I measured, I shut down ES, cleared the RAM and caches, and queried again. It takes about 1.2 seconds.
The third time I queried without clearing the caches, and it takes 97 ms.
Can anyone explain why this happens?
The last measurement I understand: it's faster because the queried data is already in the cache. I think the first query takes longer because the data has to be pulled into the cache.
But to me, after clearing the cache and RAM, the second measurement should have been equal to the first, and it wasn't. Can someone explain why?
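Part of the answer may be that the very first query also pays one-time costs (JVM warm-up, opening segment files) that do not all come back after a restart, and OS-level cache clearing rarely removes everything. To make runs comparable, you can clear Elasticsearch's own caches and disable the shard request cache explicitly. A hedged sketch against a local node over plain HTTP (the index name `my-index` is a placeholder):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EsTiming {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "http://localhost:9200/my-index"; // placeholder index

        // Drop Elasticsearch's own caches so runs are comparable
        // (the OS page cache is a separate layer and survives this).
        client.send(HttpRequest.newBuilder(URI.create(base + "/_cache/clear"))
                .POST(HttpRequest.BodyPublishers.noBody()).build(),
                HttpResponse.BodyHandlers.discarding());

        // Time a search with the shard request cache disabled.
        HttpRequest search = HttpRequest.newBuilder(
                URI.create(base + "/_search?request_cache=false&q=*:*"))
                .GET().build();
        long t0 = System.nanoTime();
        client.send(search, HttpResponse.BodyHandlers.ofString());
        System.out.printf("took %.1f ms%n", (System.nanoTime() - t0) / 1e6);
    }
}
```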
I am using Apache Cassandra to store around 100 million records. There is a single node with the following specifications:
RAM: 32 GB, HDD: 2 TB, Intel quad-core processor.
With Cassandra there is a read performance problem. Some queries take around 40 minutes to produce output. After searching for ways to improve read performance, I came across the following factors:
compaction strategy, compression techniques, key cache, increasing the heap space, and turning off swap space for Cassandra.
After applying these optimizations, the performance remains the same. Searching further, I came across integrating Hadoop with Cassandra. Is that the correct way to run queries in Cassandra, or is there some other factor I am missing here?
Thanks.
It looks like your data model could be improved. 40 minutes is unreasonable: I download all the data from 6 million records (around 10 GB) within a few minutes, and I think that's only because I convert the data while downloading and storing it. Trivial selects should take milliseconds.
Did you design your tables based on the queries you need to run?
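To make that concrete, here is a hedged sketch of query-driven modeling with the DataStax Java driver (4.x); the keyspace, table, and column names are made up, and a `demo` keyspace is assumed to already exist. The partition key matches the query, so a read touches one partition instead of scanning the cluster:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.Row;

public class QueryDrivenModel {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            // Model the table around the query: "find readings by sensor and day".
            // (sensor_id, day) is the partition key, so one query = one partition.
            session.execute(
                "CREATE TABLE IF NOT EXISTS demo.readings ("
                + " sensor_id text, day date, ts timestamp, value double,"
                + " PRIMARY KEY ((sensor_id, day), ts))");

            // This select hits a single partition: milliseconds, not minutes.
            ResultSet rs = session.execute(
                "SELECT ts, value FROM demo.readings"
                + " WHERE sensor_id = 's-42' AND day = '2016-05-01'");
            for (Row row : rs) {
                System.out.println(row.getInstant("ts") + " -> " + row.getDouble("value"));
            }
        }
    }
}
```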
I'm using Solr to do indexing on a webapp.
In production my index is pretty small: 62.11 KB for 110 documents.
Not always, but sometimes, a search takes up to 12 seconds to complete.
Currently I measure my timings through a webapp that calls Solr.
Any idea what could cause this kind of problem?
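With an index that small, 12 seconds is unlikely to be raw search time; usual suspects are GC pauses, commits with cache warming, or overhead in the calling webapp itself. One way to narrow it down is to compare the time Solr says it spent (QTime) with the wall-clock time your client observes. A hedged SolrJ sketch (the core URL is a placeholder):

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SolrTiming {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycore").build()) { // placeholder core
            SolrQuery query = new SolrQuery("*:*");
            long t0 = System.nanoTime();
            QueryResponse resp = solr.query(query);
            long wallMs = (System.nanoTime() - t0) / 1_000_000;
            // QTime is what Solr spent internally; the difference is
            // network, serialization, and the webapp's own overhead.
            System.out.println("QTime: " + resp.getQTime() + " ms, wall: " + wallMs + " ms");
        }
    }
}
```

If QTime stays small while wall time spikes, the problem is outside Solr (network, connection pool, your webapp); if QTime itself spikes, look at Solr's GC and commit/warming settings.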
I have a CouchDB application, and for most of the views I notice that the time taken by the server to return a response varies from 10 ms to 100 ms. I do not have any concurrent write operations on the server, and there are at most 10 concurrent read requests.
How should I diagnose the problem? Where should I look?
I am running it on a Rackspace cloud machine with 1 GB RAM.
From the CouchDB Guide:
If you read carefully over the last few paragraphs, one part stands out: “When you query your view, CouchDB takes the source code and runs it for you on every document in the database.” If you have a lot of documents, that takes quite a bit of time and you might wonder if it is not horribly inefficient to do this. Yes, it would be, but CouchDB is designed to avoid any extra costs: it only runs through all documents once, when you first query your view. If a document is changed, the map function is only run once, to recompute the keys and values for that single document.
Most likely you are seeing the views being regenerated and re-cached.
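If so, a common mitigation is to warm each view after writes, so that user-facing reads never trigger the rebuild described above. A minimal sketch over plain HTTP (the database, design document, and view names are placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ViewWarmer {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder database, design document, and view names.
        String view = "http://localhost:5984/mydb/_design/app/_view/by_date";

        // limit=0 makes CouchDB bring the view index up to date without
        // transferring any rows, so real queries skip the regeneration cost.
        HttpRequest warm = HttpRequest.newBuilder(URI.create(view + "?limit=0"))
                .GET().build();
        HttpResponse<String> resp = client.send(warm, HttpResponse.BodyHandlers.ofString());
        System.out.println("warmed: " + resp.statusCode());
    }
}
```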
Is there an optimal length for short strings in MongoDB, with performance in mind?
I'm currently implementing a comment system that limits comment length to somewhere around 150-300 chars, and I was wondering if there is a string length in that general range that would be more performant than others.
The thing about MongoDB is that performance is generally hardware dependent; the only way you can really find out is to test this on the hardware you'll be using in production, with test data as close to the real data as possible.
I've conducted quite a few tests on MongoDB, both on my laptop and on a Xeon server. I noticed horrible results on the laptop, e.g. a bulk insert of 10,000 records would take 90 seconds, while the same test on the server took 0.2 seconds, which I wasn't expecting. Of course the server was going to be faster, but my point is that you can't really make any assumptions about speed based on other people's results.
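As a starting point for such a test, here is a hedged sketch with the MongoDB Java driver (the connection string, database, and collection names are placeholders): it bulk-inserts 10,000 synthetic comments at a few lengths in your range and prints the timings, so you can compare on your own hardware.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.ArrayList;
import java.util.List;

public class CommentBench {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> comments =
                    client.getDatabase("bench").getCollection("comments");

            // Compare a few comment lengths in the 150-300 char range.
            for (int len : new int[] {150, 225, 300}) {
                comments.drop(); // start each run from an empty collection
                String body = "x".repeat(len);
                List<Document> batch = new ArrayList<>();
                for (int i = 0; i < 10_000; i++) {
                    batch.add(new Document("author", "user" + i).append("body", body));
                }
                long t0 = System.nanoTime();
                comments.insertMany(batch);
                System.out.printf("len=%d: %.0f ms%n", len, (System.nanoTime() - t0) / 1e6);
            }
        }
    }
}
```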