I have about 25,000 rows in my DB table 'movies' (InnoDB, 17.5 MB).
When I try to fetch them all to display in my admin panel, nothing happens: just 5-8 seconds of waiting and then a white screen. No errors are displayed, just nothing (max execution time is 3600 seconds because it's on my local machine). My simple-as-hell code:
public function index()
{
    $data['movies'] = Movies::all();
    dd('This var_dump & die never fires');
    // return view('admin.movies', $data);
}
I just wonder why it doesn't perform the query and simply dies without a declaration of war.
I didn't find anything interesting in .env or config/database.php that would explain what happens in such situations.
PS: yes, I could do server-side pagination and search and take only 10-25 records from the DB; the question is not about that.
Looks like you are running out of memory. Try querying half of the results, or maybe just 100, to see if that at least fixes the white page. If so, use chunk():
Movies::chunk(200, function ($movies)
{
    foreach ($movies as $movie)
    {
        var_dump($movie);
    }
});
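The "just 100" test mentioned above might look something like this (a minimal sketch reusing the Movies model from the question; if this renders fine, memory is almost certainly the culprit):
// Temporary sanity check: fetch only a small slice instead of the whole table.
$data['movies'] = Movies::take(100)->get();

return view('admin.movies', $data);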
You should definitely look at your storage/logs directory to verify the error. It's quite possible that fetching 25k rows takes too much memory.
In fact, as you mentioned, in real life there is no need to get that many rows unless you export them into CSV or XLS.
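If exporting really is the end goal, chunking keeps the memory footprint flat while writing the file. A minimal sketch (the file path, chunk size, and column handling are placeholders, not part of the original code):
// Stream the whole table to a CSV file, 500 rows at a time.
$handle = fopen(storage_path('movies_export.csv'), 'w');

Movies::chunk(500, function ($movies) use ($handle) {
    foreach ($movies as $movie) {
        fputcsv($handle, $movie->toArray());
    }
});

fclose($handle);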
I have a Python (3.2) query that goes to MongoDB, and the query itself runs fast enough. But when I then perform an if statement check to see whether any records were found, it takes 50 times as long:
Line # Hits Time Per Hit % Time Line Contents
==============================================================
58 27623 6475988 234.4 1.7 itemInDB = db.mainData.find({"x":item[x]}).limit(1)
59
60 #existing item in db
61 27623 293419802 10622.3 77.6 if itemInDB.count():
What on earth is causing that if statement to take so long?! I presume there must be a better way to check whether a record was found, but Google has come up empty.
Thanks for the help.
Perhaps a Better Way
If you're only interested in returning one value, you might want to use find_one instead of find. It will stop looking for values after one has been found, as opposed to find, which has to run through the collection:
itemInDB = db.mainData.find_one({"x":item[x]})
if itemInDB:
print("Item found")
else:
print("Item not found")
For Your Example
According to the PyMongo docs, cursor.count() accepts a with_limit_and_skip parameter (True or False) that controls whether any skip or limit calls previously made on the cursor are taken into account. The default is False, so count() ignores your .limit(1) and counts every matching document. That may well be what's hurting the performance of your count query.
Gauging Query Performance
If you want to see how your query will be carried out by mongo, you can call explain on your cursor:
db.coll.find({"x":4}).explain()
The explain function is also implemented in PyMongo.
It turns out it was due to the find() call and not the if statement. I created an index on "x" (as I should have done anyway), changed find to find_one, and removed the .count() from the if statement. Overall it's 75% faster.
One of the main purposes of caching is to save resources and not do things like hit your database on every request. In light of this, I'm confused about what exactly CodeIgniter does in a controller when it encounters a cache() statement.
For example:
$this->output->cache(5);
$data = $this->main_model->get_data_from_database();
$this->load->view("main/index", $data);
I realize that the cached main/index HTML file will be shown for the next 5 minutes, but during those 5 minutes will the controller still execute the get_data_from_database() step? Or will it just skip it?
Note: the CodeIgniter documentation says you can put the cache() statement anywhere in the controller function, which confuses me even more about what's getting executed.
I can answer my own question: NOTHING in the controller function other than the cached output gets executed while the cache is valid.
To test this yourself, do a database INSERT or something else that leaves a trace (e.g. write to a blank file).
I added the following code below my cache() statement, and it only inserted into the some_table table the first time I loaded the controller function, not the second time (within the 5-minute span).
$this->db->insert('some_table', array('field_name' => 'value1'));
I think this can be verified by enabling the Profiler in your controller and checking whether any query is run. Make sure this is enabled only for your IP if you're using it in a production environment.
$this->output->enable_profiler(TRUE);
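For instance, a simple guard tied to a single developer IP might look like this (the IP address is just a placeholder):
// Only show the profiler output to one known IP address.
if ($this->input->ip_address() === '203.0.113.10') {
    $this->output->enable_profiler(TRUE);
}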
-- EDIT 1 --
This will be visible only once. As soon as the cached page is stored, the profiler result won't be visible again (so you might want to delete the cache file and refresh the page).
-- EDIT 2 --
You might also use:
log_message('info', 'message');
inside your model, then change $config['log_threshold'] to 3 in config.php and check the log file.
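As a concrete illustration (the log text is arbitrary and the model method name is taken from the question):
// application/config/config.php
// 3 = errors + debug + informational messages
$config['log_threshold'] = 3;

// application/models/main_model.php, inside get_data_from_database()
log_message('info', 'get_data_from_database() was actually executed');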
-- EDIT 3 --
For sure the SELECT will be executed, unless you have enabled database caching. In that case you'll see the cached query result in the cache folder.
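Database caching can be switched on globally in config/database.php or wrapped around a single query in the model; a short sketch (the cache directory and table name are placeholders):
// application/config/database.php
$db['default']['cache_on'] = TRUE;
$db['default']['cachedir'] = APPPATH . 'cache/db/';

// Or enable it around one expensive query inside the model:
$this->db->cache_on();
$query = $this->db->get('some_table'); // first run hits the DB, later runs read the cache file
$this->db->cache_off();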
I'm trying to make an AJAX autocomplete search box that of course uses SQL (minimum 3 characters), and I already have an SQL view of the relevant fields set up and indexed in the DB. The CPU still spikes when searching, which I expected, since it runs a query for every character. I want to use the Zend shm cache to speed up results and reduce CPU usage. The results are stored in an array which is cached like this:
while ($row = db2_fetch_row($stmt)) {
    $fSearch[trim($row[0]) . trim($row[1])] = array(/* array built here */);
}
if (zend_shm_cache_store('fSearch', $fSearch, 10 * 60) === false) {
    error_log('Failed to store search cache!');
}
Of course there's actual data inside the array instead of a comment; I just shortened the code for simplicity. Columns 0 and 1 form the PK, and this has been tested to work properly. It's zend_shm_cache_store that fails, because the error log gets flooded with 'Failed to store search cache!'. I read that zend_shm_cache_store can store any array that can be serialized, so how can I tell whether my data can be serialized? Are there any other potential causes? I did make a test page that only stored a string, and that was successful, so I know caching is on.
Solved: the cache size was too small for the array. I increased the cache size and it worked fine. Sorry for the trouble.
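For anyone who hits the same wall: since zend_shm_cache_store() simply returns false on failure, one quick way to confirm a size problem is to log how large the serialized payload actually is and compare it with the shared-memory size configured for the Zend Data Cache. A rough sketch (not part of the original code):
// How big is the array once serialized? If this approaches the configured
// shared-memory segment size, the store will fail silently (returns false).
$payloadBytes = strlen(serialize($fSearch));

if (zend_shm_cache_store('fSearch', $fSearch, 10 * 60) === false) {
    error_log('Failed to store search cache (' . $payloadBytes . ' bytes)!');
}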
I'm posting this question here because I'm not sure it's a WordPress issue.
I'm running XAMPP on my local system, with 512MB max headroom and a 2.5-hour PHP timeout. I'm importing about 11,000 records into the WordPress wp_users and wp_usermeta tables via a custom script. The only unknown quantity (performance-wise) on the WordPress end is the wp_insert_user and update_user_meta calls. Otherwise it's a straight CSV import.
The process to import 11,000 users and create 180,000 usermeta entries took over 2 hours to complete. It was importing about 120 records a minute. That seems awfully slow.
Are there known performance issues importing user data into WordPress? A quick Google search was unproductive (for me).
Are there settings I should be tweaking beyond the timeout in XAMPP? Is its MySQL implementation notoriously slow?
I've read something about virus software dramatically slowing down XAMPP. Is this a myth?
Yes, there are a few issues with local vs. hosted setups. One of the important things to remember is the max_execution_time for the PHP script. You may need to reset the timer once in a while during the data upload.
I suppose you have some loop which takes the data row by row from a CSV file, for example, and uses an SQL query to insert it into the WP database. I usually put this simple snippet into my loop so it keeps resetting PHP's max_execution_time:
$counter = 1;

if (($handle = fopen("some-file.csv", "r")) !== FALSE) {
    while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
        // your upload query goes here, e.g. mysql_query(...) / wp_insert_user(...)

        // snippet: every 20 rows, reset the execution timer and the counter
        if ($counter == 20) {
            set_time_limit(0);
            $counter = 0;
        }
        $counter = $counter + 1;
    } // end of the loop
    fclose($handle);
}
Also, BTW, 512MB of headroom is not much if the database is big. Count how many resources your OS and all running apps are taking. I have an over-2GB WP database and my MySQL needs a lot of RAM to run fast (it depends on the queries you are using as well).
In Magento I write a number of small command-line scripts to do things like set a new attribute on a number of products. I am finding that updating 900 products takes about 6 hours to complete.
Loading the individual products goes as fast as I would expect, but saving once I have made the change takes a very long time.
I am attaching how I am loading the products in case there is something I can do to better optimize the process. Any help here would be greatly appreciated.
$product = Mage::getModel('catalog/product')->load($magento_id);
$product->setMadeInUsa(1);
try {
    $product->save();
} catch (Exception $e) {
    echo "ERROR: " . $e->getMessage() . "\n";
}
The code runs without error, but it takes forever.
Mage::getSingleton('catalog/product_action')
    ->updateAttributes(array($product->getId()), $attributesData, $storeId);
This code only updates the attributes you want to change. The first parameter is an array of product IDs, the second is an array of attribute names and values, and the third is the store ID you wish to update.
This is MUCH faster than saving the entire model.
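Applied to the attribute from the question, the per-product load/save loop could be replaced by something like this (the attribute code made_in_usa and store ID 0 are assumptions based on the original snippet, and $magentoIds is a hypothetical array of the IDs you were looping over):
// Update made_in_usa on a whole batch of products in one shot,
// without loading or saving the full product models.
$productIds     = $magentoIds;                 // e.g. all 900 IDs collected up front
$attributesData = array('made_in_usa' => 1);
$storeId        = 0;                           // default/admin scope

Mage::getSingleton('catalog/product_action')
    ->updateAttributes($productIds, $attributesData, $storeId);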
Try first setting indexing to Manual and then reindexing after the update is done. This should improve performance. However, the ultimate solution, if you are going to do the import often, is to follow the code ideas you can find in the 'Update Attributes' mass action, which is optimized for saving many products at once.
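A rough sketch of that indexing approach with the Magento 1 index API (treat it as an outline rather than a drop-in script):
// Put every index process into Manual mode so each update doesn't trigger a reindex.
$processes = Mage::getSingleton('index/indexer')->getProcessesCollection();
foreach ($processes as $process) {
    $process->setMode(Mage_Index_Model_Process::MODE_MANUAL)->save();
}

// ... run the attribute updates here ...

// Switch back to Update on Save and rebuild the indexes once at the end.
foreach ($processes as $process) {
    $process->setMode(Mage_Index_Model_Process::MODE_REAL_TIME)->save();
    $process->reindexEverything();
}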