Why does collections.find({}) take over 9 secs for 250 objects (MongoMapper) - ruby

I am running the following query and it takes on average 9 seconds to return the results. There are no filters on it, so I am not sure an index would help. Why is this running so slowly? There are only 250 objects in there, and only 4 fields (all text).
Country.collection.find({},:fields => ['country_name', 'country_code']).to_json
"cursor":"BasicCursor",
"nscanned":247,
"nscannedObjects":247,
"n":247,
"millis":0,
"nYields":0,
"nChunkSkips":0,
"isMultiKey":false,
"indexOnly":false,
"indexBounds":{},
"allPlans":[{"cursor":"BasicCursor","indexBounds":{}}]
The CPU, memory and disk on the machine do not even register the query running. Any help would be appreciated.

Create an index on the 'country_name' field using:
db.countries.ensureIndex({country_name:1});
That will speed up your query enormously.
You can learn more about indexes here.
PS:
In the shell you can type 'it' to display more results when you see the 'has more' prompt, or you can display all the results without the 'has more' by using:
db.countries.find({}, {'country_name' : 1, 'country_code' : 1}).forEach(printjson)
and you can always set the profiler by using:
>use databaseName;
> db.setProfilingLevel(2); // 2 tells the profiler to capture everything that happens inside the DB
You can learn more about profiler here
and you can display the data collected by the profiler using:
> db.system.profile.find()
This method will give you more info about your database and what's going on inside.
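If you are querying from a driver rather than the shell, the same steps apply there too. As an illustration only, here is a minimal PyMongo sketch (the "mydb" database name is just a placeholder) that enables full profiling via the profile command and then reads back the entries from system.profile:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]  # placeholder database name

# Equivalent of db.setProfilingLevel(2): record every operation.
db.command("profile", 2)

# The profiler writes into the capped system.profile collection of this database.
for entry in db["system.profile"].find().sort("ts", 1):
    print(entry.get("op"), entry.get("ns"), entry.get("millis"))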

Related

Laravel 5: Heavy Select Query

I have about 25,000 rows in my DB table 'movies' (InnoDB, 17.5 MB).
When I try to get them all to display in my admin panel, nothing happens. Just 5-8 seconds of waiting and a white screen. No errors displayed, just nothing (max execution time is 3600 seconds, because it's on my local machine). My simple-as-hell code:
public function index()
{
$data['movies'] = Movies::all();
dd('This var_dump & die never fires');
// return view('admin.movies', $data);
}
I just wonder why it doesn't perform the query and just dies without any warning.
I didn't find anything interesting in .ENV or config/database.php that would explain what happens in such situations.
PS: yes, I could do server-side pagination and search and take only 10-25 records from the DB, but the question is not about that.
Looks like you are running out of memory. Try querying half of the results, or maybe just 100, to see if that at least fixes the white page; if so, use chunk:
Movies::chunk(200, function($movies)
{
foreach($movies as $movie)
{
var_dump($movie);
}
});
You should definitely look at your storage\logs directory to verify the error. It's quite possible that it takes too much memory to fetch 25k rows.
In fact, as you mentioned, in real life there is no need to fetch that many rows at once unless you are exporting them to CSV or XLS.

python slow to check if mongodb record found

I have a Python (3.2) query that goes to MongoDB, and the query itself runs fast enough. When I then perform an if-statement check to see whether any records were found, it takes 50 times as long:
Line # Hits Time Per Hit % Time Line Contents
==============================================================
58 27623 6475988 234.4 1.7 itemInDB = db.mainData.find({"x":item[x]}).limit(1)
59
60 #existing item in db
61 27623 293419802 10622.3 77.6 if itemInDB.count():
What on earth is the cause of that if statement taking so long?! I presume there must be a better way to check whether a record was found, but Google has come up empty.
Thanks for the help.
Perhaps a Better Way
If you're only interested in returning one value, you might want to use find_one instead of find. It will stop looking for values after one has been found, as opposed to find, which has to run through the collection:
itemInDB = db.mainData.find_one({"x":item[x]})
if itemInDB:
print("Item found")
else:
print("Item not found")
For Your Example
According to the PyMongo docs, when querying the count of a cursor, you can pass in a parameter (True or False) to take into account any skip or limit calls previously made to the cursor. The default for that parameter is False (namely, not taking those calls into account). That may be affecting the performance of your count query.
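That parameter is easier to see in a small sketch. Assuming an older PyMongo release (cursor.count() is gone from newer ones) and the same hypothetical "mainData" collection:

from pymongo import MongoClient

db = MongoClient()["test"]  # placeholder database name
cursor = db.mainData.find({"x": 4}).limit(1)

print(cursor.count())                          # ignores the limit(1): counts all matches
print(cursor.count(with_limit_and_skip=True))  # honours the limit(1): at most 1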
Gauging Query Performance
If you want to see how your query will be carried out by mongo, you can call explain on your cursor:
db.coll.find({"x":4}).explain()
The explain function is also implemented in PyMongo.
Turns out it was due to the find() function and not the if statement. I created an index on "x" (as I should have anyway). Changed the find to find_one and removed the .count() from the if statement. Overall 75% faster.
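For reference, a rough PyMongo sketch of that fix (the names are placeholders; newer drivers use create_index where older ones had ensure_index):

from pymongo import MongoClient

db = MongoClient()["test"]     # placeholder database name
db.mainData.create_index("x")  # one-time; older PyMongo versions call this ensure_index

item = {"x": 4}                # stand-in for one item from the loop in the question
item_in_db = db.mainData.find_one({"x": item["x"]})
if item_in_db:                 # find_one returns None when nothing matches
    print("Item found")
else:
    print("Item not found")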

How to find queries not using indexes or slow in mongodb

Is there a way to find queries in MongoDB that are not using indexes or are slow? In MySQL that is possible with the following settings inside the configuration file:
log-queries-not-using-indexes = 1
log_slow_queries = /tmp/slowmysql.log
The equivalent approach in MongoDB would be to use the query profiler to track and diagnose slow queries.
With profiling enabled for a database, slow operations are written to the system.profile capped collection (which by default is 1Mb in size). You can adjust the threshold for slow operations (by default 100ms) using the slowms parameter.
First, you must set up profiling, specifying the log level you want. The 3 options are:
0 - logger off
1 - log slow queries
2 - log all queries
You do this by running your mongod daemon with the --profile option:
mongod --profile 2 --slowms 20
With this, the logs will be written to the system.profile collection, on which you can perform queries as follows:
Find all logs for a given collection, ordered by ascending timestamp:
db.system.profile.find( { ns:/<db>.<collection>/ } ).sort( { ts: 1 } );
Find logs of queries that took more than 5 milliseconds:
db.system.profile.find( {millis : { $gt : 5 } } ).sort( { ts: 1} );
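The same system.profile queries can be run from a driver. As a rough PyMongo sketch (the database name is a placeholder, and the planSummary field is only present on newer server versions):

from pymongo import MongoClient

db = MongoClient()["mydb"]  # placeholder database name
profile = db["system.profile"]

# Operations that took more than 5 milliseconds, oldest first.
for op in profile.find({"millis": {"$gt": 5}}).sort("ts", 1):
    print(op.get("ts"), op.get("ns"), op.get("millis"))

# Operations that did a full collection scan, i.e. used no index (newer servers only).
for op in profile.find({"planSummary": "COLLSCAN"}):
    print(op.get("ts"), op.get("ns"))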
You can use the following two mongod options. The first option makes queries that do not use an index fail (v2.4 only); the second records queries slower than some millisecond threshold (the default is 100 ms).
--notablescan
Forbids operations that require a table scan.
--slowms <value>
Defines the threshold for "slow" operations used by the --profile option. The database writes all slow queries to the log, even when the profiler is not turned on. When the database profiler is on, mongod writes the profiler output to the system.profile collection. See the profile command for more information on the database profiler.
You can use the command line tool mongotail to read the log from the profiler within a console and with a more readable format.
First activate the profiler and set the threshold in milliseconds for the profiler to consider an operation slow. In the following example the threshold is set to 10 milliseconds for a database named "sales":
$ mongotail sales -l 1
Profiling level set to level 1
$ mongotail sales -s 10
Threshold profiling set to 10 milliseconds
Then, to see the slow queries in "real time", with some extra information like the time each query took or how many records it needed to "walk" through to find a particular result:
$ mongotail sales -f -m millis nscanned docsExamined
2016-08-11 15:09:10.930 QUERY [ops] : {"deleted": {"$exists": false}, "prod_id": "367133"}. 8 returned. nscanned: 344502. millis: 12
2016-08-11 15:09:10.981 QUERY [ops] : {"deleted": {"$exists": false}, "prod_id": "367440"}. 6 returned. nscanned: 345444. millis: 12
....
In case somebody ends up here from Google on this old question, I found that explain really helped me fix specific queries that I could see were causing COLLSCANs from the logs.
Example:
db.collection.find().explain()
This will let you know if the query is using a COLLSCAN (Basic Cursor) or an index (BTree), among other things.
https://docs.mongodb.com/manual/reference/method/cursor.explain/
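The same check works from PyMongo as well. A small sketch (the names are placeholders, and the exact shape of the explain document varies by server version):

from pymongo import MongoClient

coll = MongoClient()["mydb"]["collection"]  # placeholder names

plan = coll.find({"x": 4}).explain()
winning = plan.get("queryPlanner", {}).get("winningPlan", {})
print(winning.get("stage"))  # "COLLSCAN" means no index was used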
While you can obviously use the profiler, a very neat feature of MongoDB that made me fall in love with it is MongoDB MMS.
It takes less than 60 seconds to set up, and you can manage your deployment from anywhere. I am sure you will love it.
https://mms.mongodb.com/

Zend Framework Cache

I'm trying to make an AJAX autocomplete search box that of course uses SQL (minimum 3 characters), and I have a SQL view of the relevant fields already set up and indexed in the DB. The CPU still spikes when searching, which I expected since it runs a query for every character. I want to use the Zend shm cache to speed up results and reduce CPU usage. The results are stored in an array which is to be cached like this:
while($row = db2_fetch_row($stmt)) {
$fSearch[trim($row[0]).trim($row[1])] = array(/*array built here*/);
}
if (zend_shm_cache_store('fSearch', $fSearch, 10 * 60) === false) {
error_log('Failed to store search cache!');
}
Of course there's actual data inside the array instead of comments; I just shortened the code for simplicity. Rows 0 & 1 form the PK, and this has been tested to work properly. It's the zend_shm_cache_store call that fails, because the error log gets flooded with 'Failed to store search cache!'. I read that zend_shm_cache_store can store any array that can be serialized - how can I tell if my data is serialized or can be serialized? Are there any other potential causes? I did make a test page that only stored a string, and that was successful, so I know caching is on.
Solved: the cache size was too small for the array. I increased the cache size and it worked fine. Sorry for the trouble.

Limitation in retrieving rows from a mongodb from ruby code

I have code which gets all the records from a MongoDB collection and then performs some computations.
My program takes too much time because "coll_id.find().each do |eachitem|......." returns only 300 records at a time.
If I place a counter inside the loop and check, it prints 300 records and then sleeps for around 3 to 4 seconds before printing the counter values for the next set of 300 records.
coll_id.find().each do |eachcollectionitem|
puts "counter value for record " + counter.to_s
counter=counter +1
---- My computations here -----
end
Is this a limitation of the ruby-mongodb API, or does some configuration need to be done so that the code can access all the records at once?
How large are your documents? It's possible that the deserialization is taking a long time. Are you using the C extensions (bson_ext)?
You might want to try passing a logger when you connect. That could help sort out what's going on. Alternatively, can you paste in the MongoDB log? What's happening there during the pause?
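If the pauses line up with the cursor fetching its next batch of results from the server, raising the cursor's batch size may cut down on round trips; a similar option exists in the Ruby driver. As an illustration of the concept only (shown in PyMongo, with placeholder names):

from pymongo import MongoClient

coll = MongoClient()["mydb"]["records"]  # placeholder database/collection names

# Default batching: the driver issues a getMore whenever the current batch runs out,
# which is where pauses between groups of documents can come from.
for doc in coll.find():
    pass

# A larger batch size means fewer round trips to the server for the same result set.
for doc in coll.find(batch_size=1000):
    pass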
