Really slow Doctrine queries on Bolt backend

I just uploaded to a remote server and am getting strange behaviour with Doctrine.
Doctrine is making lots of requests to information_schema.tables and takes around 2 seconds for each one, making page requests to the backend take around 25-30s. The other queries seem to be happening quickly. What's going on here and how can I stop it?
Output from debugger:
SELECT count(*) FROM information_schema.tables WHERE (table_schema = 'cl50-merc' OR table_catalog = 'cl50-merc') AND table_name = 'bolt_news';
Parameters: []
Time: 1896.29 ms
I should note that it doesn't take this long to make these requests on my local server.

Whenever you move a Bolt website from localhost to a remote server, try deleting all the files in the 'app/cache' folder (except 'index.html') before moving the files.
If you have already moved the site, start by deleting the files under 'app/cache/profiler'; deleting everything from the cache folder should be the last resort. If the queries are still slow or the website is still taking too much time to load, then delete all files from the 'app/cache' folder and it should be fixed.
Reason: on localhost you do a lot of things that are not required for the actual website, for example auto-insertion of dummy data into new categories. All of this gets cached in the cache folder, and when you transfer the whole website to the remote server, Bolt still tries to use those cached queries before it actually runs any other queries for the website.

Related

Error 502 with Laravel when exporting to Excel on Azure Web App Linux

I have a Laravel App running on Azure Web App Linux service, all running nice and smoothly until I reach a feature that exports a query to an XLS for download. Then I receive the ERROR 502.
On my local environment it works normally; I can export the query to XLS with no issues. It is not a large query, just a few rows.
In the same app, I have a function that exports just 1 row at a time to XLS and works fine, so it is only when I go for a larger(ish) query.
Any ideas? I have tried scaling up, restarting the app and Apache, and changing the .ini (via .htaccess) to increase execution time.
There is no trace in the logs either; there is something about the container crashing, but I cannot trace it to this particular error.
OK, managed to solve it... it was not straightforward at all. It has to do with the size of the query: even though it is not big by any means (a couple of thousand rows max), raising the memory limit to 1024M or further still ended up in a 502 error. I decided to try something different and moved from Laravel Excel to Fast-Excel, which is less featured but man... it works. Now everything downloads perfectly. In case you are having this issue, give fast-excel a try.

PostgreSQL statistics issue - could not rename temporary statistics file

I am running PostgreSQL 9.4 on Windows, and constantly get this error:
2015-06-15 09:35:36 EDT LOG: could not rename temporary statistics file "pg_stat_tmp/global.tmp" to "pg_stat_tmp/global.stat": Permission denied
I also see constant 200-800k writes to global.stat and global.tmp. I have seen other users with the same issue, but no solution.
It is a big database server, with 300 GB of data and 6,000 databases.
I tried setting,
track_activities=off
in the config file, but it did not seem to have any effect.
Any help for the error, or reducing the write?
After my initial answer, I decided to research the operation of the stats collector and in particular what it is doing with the files in pg_stat_tmp. I've substantially re-written the answer as a result.
What are the global.stat / global.tmp files used for?
Postgresql contains functionality to collect statistics and status information about its operation. The function is described in Section 27.2 of the manual.
This information is collated by the stats collector process. It is made available to the other postgresql processes via the global.stat file. The first time you run a query that accesses this data within a transaction, the backend which you are connected to will read the global.stat file and cache the result, using it until the end of the transaction.
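As a rough illustration (a minimal sketch using only the standard statistics views, nothing specific to your installation), you can observe this per-transaction caching from any SQL session:
-- Inside a single transaction the statistics snapshot is read once and then cached.
BEGIN;
SELECT datname, xact_commit, blks_read FROM pg_stat_database;  -- first access reads the stats file
SELECT datname, xact_commit, blks_read FROM pg_stat_database;  -- repeats reuse the cached snapshot
SELECT pg_stat_clear_snapshot();                               -- explicitly discard the cached snapshot
SELECT datname, xact_commit, blks_read FROM pg_stat_database;  -- forces a fresh read of the stats file
COMMIT;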
To keep this file up to date, the stats collector process periodically re-writes it with updated information. It typically does this several times a second. The process is as follows:
Create a new file global.tmp
Write data to this file
Rename global.tmp as global.stat, overwriting the previous global.stat
The global.tmp and global.stat files are written into the directory configured by the stats_temp_directory configuration parameter. Normally this is set to $PGDATA/pg_stat_tmp.
On shutdown, the stats file is written into the file $PGDATA/global/pgstat.stat, and the files in the tmp dir above are removed. This file is then read and removed when the database is started up again.
Why is the stats collector process creating so much I/O load?
Normally, the amount of data written to global.stat is relatively modest and writing it does not generate that much I/O traffic. However, under some circumstances it does seem to get very bloated. When this happens, the amount of load generated can start to get excessive as the entire file is rewritten more than once a second.
I have had one experience where it grew by a factor of 10 or more, compared to other similar servers. This machine did have an unusually large number of databases (for our application at least - 30-40 databases - but nothing like the 6,000 you say you have). It is possible that having a large number of databases exacerbates this.
Some of the references below talk about a pattern of creating / dropping lots of tables causing bloat in these files, and that perhaps autovacuum is not running aggressively enough to remove the associated bloat. You may wish to consider your autovac settings.
Why do I get 'Permission Denied' errors on Windows?
After examining the Postgresql source code, I think there may be a race condition in accessing the global.stat file which could happen at any time, but is exacerbated by the size of the file.
The default mode of operation in Windows is that it is not possible to rename or remove a file while another process has it open. This is different to Linux (or Unix) where a file can be renamed or removed while other processes are accessing it.
In the sequence above you can see that if one of the backend processes is reading the file at the same time as the stats collector is rewriting it, then the backend process may still have the file open at the time the rename is attempted. That leads to the 'Permission Denied' error you are seeing.
Naturally when the file becomes very large, then the amount of time taken to read it becomes more significant, therefore the probability of the stats collector process attempting a rename while a backend still has it open increases.
However, since the file is frequently being rewritten, the impact of these errors is relatively mild. It just means that this particular update fails, leaving the backends with slightly out-of-date statistics. The next update will probably succeed.
Note that Windows does offer a file-opening mode which allows files to be deleted or renamed while they are open in another process; however, as far as I could tell, this mode is not used by Postgresql. I could not find any bug report on this - it seems like it should be reported.
In summary, these errors are a side effect of the main problem, which is the excessive size of the global.stat file.
I've turned track_activities off but the file is still being written - Why?
From what I can see, track_activities affects only one of the sets of information that the stats collector is collecting.
In addition, it looks as though the stats collector process is started regardless of these settings, and will continue to re-write the file. The settings appear to control only the collection of fresh data.
My conclusion is that once the file has become bloated, it will remain so and continue to be re-written, even once all of the stats collection options are turned off.
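If you want to verify which of the collection options are actually enabled, a quick check (plain SQL against the standard pg_settings view, nothing specific to your setup) is:
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'track%';
Note that autovacuum relies on track_counts, so that particular option is usually best left on.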
What can I do to avoid this problem?
Once the file has become bloated, it seems that the easiest way to get the database back into a good working state is to remove the file, using the following steps:
Stop the database
When the DB is stopped, the pg_stat_tmp directory is empty and a file $PGDATA/global/pgstat.stat is written. Rename this file to pgstat.stat.old.
Start the database. It creates a fresh set of pgstat files. After confirming the server is operating correctly, you can remove the old file you renamed.
This is the process we used when one of our servers suffered from this problem.
Needless to say be very careful when manually manipulating any files under the Postgresql Data directory.
After this you may want to monitor the server to see if the file becomes bloated again. If it does, here are some additional ideas to consider:
As mentioned above, I have seen some references to this file becoming bloated if autovacuum is not running aggressively enough. You may wish to tune your autovacuum settings.
Disabling any of the track_xxx options described in Section 18.9.1 of the manual which are not required may help.
It is possible to place the pg_stat_tmp directory on a tmpfs filesystem (or whatever equivalent RAM-based filesystem is available in Windows). Doing so should eliminate I/O as a concern for these files; a sketch of how to point Postgresql at a new location is shown below.
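Assuming a RAM-backed volume is already mounted somewhere (the path below is just a placeholder to adapt), the relocation can be done from a SQL session and only needs a configuration reload, not a restart:
ALTER SYSTEM SET stats_temp_directory = 'R:/pg_stat_tmp';  -- placeholder path on a RAM disk / tmpfs
SELECT pg_reload_conf();                                   -- picks up the change without restarting
SHOW stats_temp_directory;                                 -- confirm the new location is in effect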
References:
Postgres stats collector showing high disk I/O
Too much I/O generated by postgres stats collector process
stats collector suddenly causing lots of IO
Here might be a solution for your problem. https://wiki.postgresql.org/wiki/May_2015_Fsync_Permissions_Bug
Another possibility could be your antivirus settings; try turning the antivirus off temporarily.
It happened to me a few days ago. I rebooted the machine, but the error did not disappear.
I don't know why, but performing a VACUUM ANALYZE VERBOSE did the trick, and the error has stopped showing up.

Magento - How do I transfer from local to live?

I know this question has been asked quite a few times, but nothing seems to be working for me...
I really need help. I have made my entire website on my localhost, but how do I get it up and running live? I've tried everything :( I've copied all of my files onto the live server and looked at endless tutorials, but nothing's working. Can you maybe do a video about this or tell me what to do? I really don't want to start ALL OVER creating all the pages and static blocks and so on.
You just have to change the URL in the database table. Run the query:
SELECT *
FROM `core_config_data`
WHERE `value` LIKE 'http://%';
and change the URL from localhost to your live server URL, for example with an UPDATE like the one sketched below. Hopefully that'll work. Thanks
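A hedged sketch of that change (the localhost path and the live domain below are placeholders for your own values):
UPDATE core_config_data
SET value = REPLACE(value, 'http://localhost/', 'http://www.your-live-domain.com/')
WHERE value LIKE 'http://localhost/%';
After changing these values, clear the contents of var/cache so Magento picks up the new URLs.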
I guess that you have already created many blocks and CMS pages and done a lot of customizations in the backend on your local system. You have to do the following:
Copy your Magento code completely to the server.
On the server, delete app/etc/local.xml
Create an empty MySQL database on the server
Back up your local database
Import that local database into the database on the server (which you created in step 3)
Run the Magento site via the browser
Because you deleted the local.xml file, Magento will start the installation process and ask you for the database parameters on the server (here, enter the data you used when creating the db in step 3: db name, username, password, ...). And that's it. Magento will make a connection with that db and you will have everything you had locally.
One more thing which I forgot:
you have to change one field in the database (you will change this in the live database after importing the local database, i.e. after step 5). There should be a table core_config_data. Search in that table, and wherever you find your local URL, like:
http://localhost/magento/
or something like that, you should change it to your real domain, for example:
http://my-magento-domain.com/

Reduce inodes count on Magento website

I am getting errors on my website and my website's inode count is over the limit. The hosting inode limit is 200,000 but my website's inode count is 909,496 and I can't even open phpMyAdmin. The hosting support asked me to remove unused files. How can I decrease the inode count, and which files are unused in a Magento-based website?
Usually an indicator that you need a more capable hosting provider.
The major places that Magento creates files during operation are in the var/ folder and your product image cache.
If you've never checked before, these can accumulate a phenomenal amount of detritus. Using an FTP client, check the following areas in your var/ folder:
Check that you don't have a bazillion session files in var/session; remove anything older than the current date
Check that there isn't an excessive number of files in var/report; you might want to find out why Magento is generating them and fix the issue. Delete them all.
Logging will, over time, generate several huge files in var/log; delete them and then look at the new ones to find out what errors are being generated.
Imports and other stuff can cause temporary files to accumulate in var/tmp; delete them. Also check var/import for old imports that can be deleted.
Stored database backups are kept in var/backup. Using the admin backend, go to System > Tools > Backups:
Download the latest database backups to a local workstation and delete all backups.
Magento uses a lot of caching to store information. The biggest will be the image cache if you have a large catalog; it will contain cached images from the beginning of time, and lots of useless ones if you've deleted products over time. Using the admin backend, go into System > Cache Management:
Clear the Magento Cache.
Flush Catalog Images Cache.
Magento does not delete product images when you delete a product. In fact, Magento would be a prime candidate for appearing on one of those Hoarder programs that were prevalent on TV there for a while.
After you get the site working, consider installing ImageClean.
Hopefully this will have reduced your inode count enough to carry out the following operations. Before proceeding, do a couple of database backups and store them off-server!!!
Next step is to ask your hosting provider whether they include your database in that inode count. If they do, you are kind of stuck: Magento uses InnoDB, and they have likely (cheaply) not set up MySQL with files-per-table, which is what lets you shrink the InnoDB files by optimizing each table. Ask them if they use files-per-table when they set up MySQL; if they don't know what it is, develop that sinking feeling in the pit of your stomach.
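A quick way to check this yourself from phpMyAdmin or the MySQL console (standard MySQL, no Magento-specific assumptions) is:
SHOW VARIABLES LIKE 'innodb_file_per_table';
-- 'ON'  means each table has its own .ibd file, so OPTIMIZE TABLE can actually return space to the filesystem
-- 'OFF' means everything lives in the shared ibdata1 file, which never shrinks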
Some tables get excessively huge, especially if you haven't properly set up the Magento master cron job trigger in your cPanel and checked to make sure log table cleaning is enabled in System > Configuration > Advanced > System > Log Cleaning. These tables are as follows:
'dataflow_batch_export',
'dataflow_batch_import',
'log_customer',
'log_quote',
'log_summary',
'log_summary_type',
'log_url',
'log_url_info',
'log_visitor',
'log_visitor_info',
'log_visitor_online',
'index_event',
'report_event',
'report_viewed_product_index',
'report_compared_product_index',
'catalog_compare_item',
'catalogindex_aggregation',
'catalogindex_aggregation_tag',
'catalogindex_aggregation_to_tag'
Magento has a built-in script to clean the logs. If running it crashes with a memory error because you've never set the cron job up and there's too much bloat to clean out, Crucial Web Host has a script that can be run to manually delete all log table contents, including the dataflow tables, which won't be cleaned out by the Magento log cleaning process. If you use dataflow import/export a lot, Nexcess has a script that can check the dataflow table sizes and clear them as well.
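If you cannot run either script, a rough manual equivalent (a sketch only - take a database backup first; these are the standard Magento 1 table names from the list above) is to truncate the log and dataflow tables directly:
SET FOREIGN_KEY_CHECKS = 0;
TRUNCATE TABLE log_customer;
TRUNCATE TABLE log_quote;
TRUNCATE TABLE log_summary;
TRUNCATE TABLE log_summary_type;
TRUNCATE TABLE log_url;
TRUNCATE TABLE log_url_info;
TRUNCATE TABLE log_visitor;
TRUNCATE TABLE log_visitor_info;
TRUNCATE TABLE log_visitor_online;
TRUNCATE TABLE dataflow_batch_export;
TRUNCATE TABLE dataflow_batch_import;
SET FOREIGN_KEY_CHECKS = 1;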
After cleaning the database, you will need to use phpMyAdmin to optimize each table in your Magento database. If the hosting provider hasn't set up files-per-table in MySQL, it will do squat for reducing your inode count.
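To see which tables are actually worth optimizing, something along these lines works (the table names passed to OPTIMIZE are only examples; pick the biggest ones reported by the first query):
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
WHERE table_schema = DATABASE()
ORDER BY (data_length + index_length) DESC
LIMIT 20;
OPTIMIZE TABLE log_visitor_info, report_event, dataflow_batch_import;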
After all that, don't bother messing with deleting application files or anything else Magento uses. It doesn't really accumulate that much aside from the var/ folders and the image cache, and deleting anything else will likely leave you with a dead website.
At this point, you're at the mercy of a shared server hosting plan that has decided to be fair to everyone by limiting what can be done in each account, and doesn't allow enough resources to run Magento. Start looking for a hosting provider that supports Magento; often they don't bother limiting your inode count (a cheap trick to allow too many people to share a hard drive) as they offer plenty of disk space for you to run your e-commerce website.

Magento reindex does not populate solr search

We have Solr installed as a Tomcat application. With the default installation of Magento EE with sample data, everything runs perfectly.
However, with a couple of our stores running, the index does not touch the /solr/data folder. Normally when a reindex is called, all files in /data are cleared and repopulated. However, on the live site this does not happen.
I have stripped back a lot of extensions and run the index again... this time the files are recreated but almost no data is captured, even less than the sample stores, despite a much larger database.
Anyone any clues as to where we should be looking?
