My Joomla! site loads very slowly and sometimes returns this error:
Fatal error: Maximum execution time of 30 seconds exceeded in D:\Hosting\6926666\html\libraries\joomla\environment\request.php on line 11
Note that the path in the error message (D:\Hosting\6926666\html\libraries\joomla\environment\request.php on line 11)
always changes (it is not the same path every time).
Hint: locally my site works very well; the problem happens when both the site and the database are on the server, or when only the database is on the server.
My site runs Joomla! 1.6 and my host is godaddy.com.
I changed to another template and the website sped up considerably. I then did some investigating and found that the template's index.php had been HACKED: there was eval() and base64_decode() code in there! As soon as I put in a "clean" version of the PHP, the website was back to normal!
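If you suspect the same kind of injection, a quick way to scan a template (or a whole docroot) is to grep for the eval/base64_decode pattern. The sketch below plants a sample "infected" file in a temp directory purely so the example runs anywhere; in practice you would point grep at your real Joomla templates path instead:

```shell
# Create a throwaway "infected" file for demonstration only
tmp=$(mktemp -d)
printf '<?php eval(base64_decode("ZWNobyAiaGFja2VkIjs=")); ?>\n' > "$tmp/index.php"

# Recursively list PHP files containing the eval(base64_decode(...)) pattern;
# run this against your templates/ directory or the whole docroot
grep -rl --include='*.php' -E 'eval[[:space:]]*\([[:space:]]*base64_decode' "$tmp"

rm -rf "$tmp"
```

This only catches the plainest form of the hack; attackers often obfuscate further, so treat a clean grep as necessary but not sufficient.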
I agree with Jav_Rock: godaddy is not "compatible" with Joomla.
So, to solve your problem, you should find another hosting provider!
Are you using any third-party extensions? If so, how many? The more you use, the more likely it is that one of them has introduced an 'issue'.
I'd like to suggest that you review the code and database structure associated with each extension to determine where there might be a problem. I'm currently doing this with a fairly simple site that is using a LOT of extensions (so many that it breaks the admin UI). Some of them are terribly written - e.g. making many database lookups when 1 is required, writing to the database with every single request, storing several values in a single column, etc.
However, you say you're a 'beginner', so I think your best bet is either:
Get someone who is not a beginner to do the above review for you
Experiment by enabling/disabling each extension in turn to determine which are problematic. That's not very scientific, but might get you somewhere.
If you can, try enabling debug (you can do that via the Global Configuration). This will show you - amongst other things - how many database queries are required to generate your page. I can't give you an absolute figure for what is 'good' or not, suffice it to say that fewer is better! So, for example, if you identify a low-priority extension whose removal saves you 25% of the total, you might well decide to disable that module - and, possibly, look for an alternative. As a guide, the site I'm working on originally made 500-600 queries for the home page. In my opinion that is far, FAR too many.
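Enabling debug as described above is a one-flag change; if you prefer editing configuration.php directly rather than using the Global Configuration screen, the relevant excerpt looks roughly like this (a sketch based on the Joomla 1.6-era config layout):

```php
<?php
// configuration.php (excerpt) - same effect as enabling
// "Debug System" in Global Configuration
class JConfig {
    /* ... other settings ... */
    public $debug = '1';       // '1' shows the debug console, incl. the DB query list
    public $debug_lang = '0';  // leave language debugging off
    /* ... */
}
```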
for:
Fatal error: Maximum execution time of 30 seconds exceeded
first create a php.ini file containing only the line max_execution_time = 120, which sets the execution time to 120 seconds (2 minutes); adjust the value to suit you.
Then put that file into your Joomla administrator folder.
Or you can edit the server's php.ini directly if you have access to it.
Also enable caching for the required modules.
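A minimal version of that php.ini override (120 is just an example value; whether per-directory php.ini files are honored depends on your host's PHP setup):

```ini
; php.ini placed in the Joomla administrator folder
; raises the script timeout from the default 30s to 120s
max_execution_time = 120
```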
Related
The domain of my blog is codesaviour.
Since last month, my blog and wp-admin dashboard have slowed down to a frustrating level. I have already removed post revisions after reading about speeding up WordPress.
Here is the Google PageSpeed Insights report for my blog. According to it, the server response time is 11s.
I even read the following threads on Stack Overflow:
link. I tried to implement the steps, but the blog is still slow; no change.
My host is Hostgator.in. Their online assistance asked me to enable gzip compression as instructed at link, so I followed the instructions. As I did not have a .htaccess file on the server, I created one and pasted in the code mentioned in the previous link, but nothing helped. It is as slow as before, and online reports don't even show that gzip is working.
Here is a report from GTmetrix that includes the PageSpeed and YSlow reports. The third tab, Timeline, shows that it took 11.46s in receiving.
The main problem is the server response time: 11s (Google PageSpeed report) or 11.46s (GTmetrix report).
Google suggests reducing it to under 200ms. How can I do that?
#Constantine responded in this link that many WordPress websites are going through the same slow phase.
I am using following plugins:
Akismet
Google Analyticator
Google XML Sitemaps
Jetpack by WordPress.com
Revision Control
SyntaxHighlighter Evolved
WordPress Gzip Compression
WordPress SEO
WP Edit
Every time I select "Add New" plugin, the following error is reported:
An unexpected error occurred. Something may be wrong with
WordPress.org or this server’s configuration.
Also, whenever I install a plugin using the upload option, it gives me this error:
Can't load versions file.
http_request_failed
Please help me increase the speed of my blog and dashboard, and also suggest fixes for the errors I am receiving.
Edit
Automatically, without any changes on my part, the 11.46s has been reduced to 1.26s.
I will focus on the speed issue. Generally, when things start to be slow, it is a good idea to test by gradually switching off features until the site is fast again; the last thing you switched off is the slow one. Then look at that thing in detail: try to split the given task into subtasks and repeat the process until you find the exact cause of the problem. I would do that with the plugins as well. After the testing is finished, I would put the features back.
Use an effective caching plugin like "WP Super Cache". It drastically improves your page's load time. Optimizing your images is also essential for your site's speed; WP-SmushIt performs great for this. The last plugin I highly recommend is WP-Optimize, which basically cleans up your WordPress database and optimizes it without you running manual queries. It sometimes gives errors when you have installed the same plugin more than once. In that case, first delete the plugin via your FTP program instead of through the WordPress admin; otherwise it won't work properly due to errors. Then try installing the same plugin again.
If you're going to maintain a site about programming then you really have to fix the performance. It really is awful.
The advice you get from automated tools isn't always that good.
Looking at the link you provided the biggest problem is the HTML content generation from GET http://codesaviour.com/ which is taking 11.46 seconds (there are problems elsewhere - but that is by far the worst) - 99% of the time the browser is just waiting - it only takes a fraction of a second to transfer the content across the network. Wordpress is notorious for poor performance - often due to overloading pages with plugins. Your landing page should be fast and cacheable (this fails on both counts).
even online reports doesn't show that gzip is even working
The HAR file you linked to says it is working. But compression isn't going to make much impact: the page is only 8.4KB uncompressed. The problem is with the content generation.
You should certainly use a Wordpress serverside cache module (here's a good comparison).
DO NOT USE the Wordpress Gzip plugin - do the compression on the webserver - it's much faster and more flexible.
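Doing the compression on the webserver usually means mod_deflate on Apache; a minimal .htaccess sketch (whether your host allows these directives in .htaccess is an assumption you would need to verify):

```apache
# .htaccess - compress text responses with mod_deflate
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css text/plain application/javascript application/json
</IfModule>
```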
In an ideal world you should be using ESI - but you really need control over the infrastructure to implement that properly.
Diagnosing performance problems is hard - fixing them is harder, and that is when you have full access to the system it's running on. I would recommend you set up a local installation of your stack and see how it performs there - hopefully you can reproduce the behaviour and will be able to isolate the cause - start by running XHProf and checking the MySQL query log (I'm guessing these aren't available from your hosting company). You will however be able to check the state of your opcode cache - there are free tools for both APC and ZOP+. Also check the health of your MySQL query cache.
Other things to try are disabling each of the plugins in turn and measuring the impact (you can get waterfalls in Firefox using the Firebug extension, and in Chrome using the bundled developer tools).
You might also want to read up a bit on performance optimization - note that most books tend to focus on the problems client-side but your problems are on your server. You might even consider switching to a provider who specializes in Wordpress or use a different CMS.
symcbean's answer is good, but I would add a few things:
This is a server-side issue
This has been said by others, but I want to further emphasize that this is a server side issue, so all those client-side speed testing tools are going to be of very limited value
HostGator isn't high-performance hosting
I don't know about India, but HostGator in the US is generally very slow for dynamic, database-driven sites (like yours). It absolutely shouldn't take 11 seconds to load the page, especially since your site doesn't look particularly complex, but unless you're serving a totally static site, HostGator probably won't ever give you really stellar performance.
Shared hosting leaves you at the mercy of badly-behaved "neighbors"
If you're using one of HostGator's standard shared hosting packages (I'm assuming you are), you could have another site on the same machine using too many resources and crippling the performance of your site. See if you can get HostGator to look into that.
Why not use a service built for this?
This looks like a totally standard blog, so a service like Tumblr or Wordpress.com (not .org) might be a better choice for your needs. Performance will be excellent and the cost should be very low, even with a custom domain name. If you aren't experienced in managing WordPress and don't have any interest in learning how (don't blame you), why not leave all that to the experts?
You need to make some adjustments to speed up WordPress.
The first step is: remove any unwanted plugins you have in WordPress.
The second step is: delete the themes you are not using.
The third step is: compress all images with lossless quality.
The fourth step is: clean up the database.
If you have done all these steps, your WordPress should be fixed. For more details, check out this link: How to fix WordPress dashboard slow.
Other than the usual suggestions: if you are hosting your MySQL db on a different host from the web server, check the latency between the two. WordPress is unbelievably chatty with its db (50+ db calls to load each dashboard page, for example). By moving the db onto the same host as the web server, I got excellent performance.
I'm dealing with some loading speed issues on a ModX Revolution (2.2.2-pl) installation. I believe the problem is rooted in the fact that hundreds of sites are hosted and accessible from the same administrator window, but unfortunately I don't have a say in that setup.
It seems that the ajax calls are puttering - fully loading the sidebar takes about 10 seconds, and saving takes about 15 seconds.
I was dabbling in some database stuff recently and came across some information on indexing. Space on the server is not a big concern, so is there something in the database I could index to speed up these calls?
Have you looked into the JavaScript & CSS options and limits in the system settings?
The newest versions of MODX keep improving in speed (i.e. keep it updated to the latest release).
Browsers will make a difference as well; I find Chrome is the fastest - I'm assuming it has the fastest JavaScript engine.
Other than that you should be able to customize the manager in such a way that you have a different administrative login for each context [each website] by creating admin users that only have access to each given context. You should be able to get lots of help for that one in the modx forums.
EDIT
There are several options in the manager that you can tweak/enable/disable that may help:
- Use Compressed CSS
- Use Compressed JavaScript Libraries
- Use Grouping When Compressing Javascript
- Maximum JavaScript Files Compressed
- Manager JS/CSS Compression Cache Age
Though most of these will already be 'optimized'
I think there is a way to disable the resource browser refreshing when you save as well, but I can't seem to find it now.
If your administration has already been restricted to different contexts I don't think you should even be able to see a resource that is not in the context you have access to. i.e. if you can still see other resources, then you still have ~some~ sort of access. It sounds to me like your problem is the time it takes for the resource browser to redraw, not the actual time it takes the server to process it [right?]
What version of MODX Revo are you using? There have been a lot of performance improvements in the past few releases.
Related Post
I have a Drupal site built on a shared host and I'm finding that the site is very slow to respond. I suspect it's the host and not my Drupal/database configuration, but I don't know how to decipher the results from Pingdom.
I have also read Explanation of Pingdom Results but am unsure of how to resolve my problems.
Pingdom results show a Load Time of 60 seconds.
Performance Grade tab shows results of all items at or near 100.
According to the Page Analysis tab, most of the time is spent on the Wait state.
Does the above indicate a problem with my hosting or perhaps domain name provider or is there something that I can do to improve performance of my website?
I should also mention that I've used other tools like Google's Page Speed Chrome plugin and Firefox's Yslow plugin and both give an above average rating to my webpages which leads me to believe it's an issue with my host.
Drupal has this issue of abusing database queries, especially if you use a lot of modules on one page and do not cache anything. That may slow down your site considerably. I use Pressflow, a Drupal profile, to reduce load times. I also add Varnish to the server (you can look at Memcache too) and add the Boost module to the site itself. But the most important thing is to get the queries-per-page-load number right. If you have written some custom code, optimize it. Look for ways to get the same data without sending queries to the server; maybe some data was already loaded to the page and you do not need some of those queries.
In your particular case I think there is some loose loop which does not end, but which has a safety trigger that kills it after a certain amount of time. I can bet that the reason is somewhere in your custom code or in some underdeveloped module. Try to enable display of all errors.
P.S. A link to an example of such a page would be the best way to determine what is wrong.
My website is http://secretpassagesbooks.com/. It runs on the latest version of wordpress and is hosted via GoDaddy on a shared web server.
My website takes anywhere from ten seconds to one minute to load, and I don't understand why. I have tested in IE, Firefox, and Chrome, and the page speed is the same. I performed several speed tests at various online speed-test sites and have an average load time of 5-6 seconds. Yet when I click on a link to my URL or enter it directly, it takes in excess of 30 seconds (sometimes more than a minute) to load the index page.
Here is what I have done so far to troubleshoot the issue:
I have the YSlow and Page Speed extensions installed in Firebug
YSlow test gives me a "Grade A - Overall performance score 90"
My Page Speed a score is 94/100
I have the W3 Total Cache WordPress plugin installed and am using page, browser, and database object caching
I've tried minimizing as much CSS and JavaScript as possible
The site is using HTTP compression
Is there anything more I can do with this design, or is it case of my shared web server being overloaded? Thanks in advance for all your help.
YSlow, etc detect problems in the HTML, Javascript and CSS parts, and these are probably OK. It looks like your hosting is to blame.
If those plug-in results are correct (and I've no reason to doubt they are), then it's most likely a case of your virtual server simply being overloaded.
I presume you have no such issues running an identical site in a "local" production environment either, although you might want to try this to confirm if you've not already done this.
Incidentally, a tell-tale sign of an overloaded VPS/shared hosting solution is if the first page load is incredibly slow but subsequent loads are "normal" - a common reason being that your "dedicated" sandbox is being woken from a sleep/low-resource state. (This also seems to be the case as far as your site is concerned.) As such, it's possible (I don't know the details of this server, such as whether you have a "guaranteed" resource level for CPU, memory, etc.) that other sites on this particular server are using more than their fair share of bandwidth until your site kicks in.
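As a rough check for that wake-from-sleep pattern, you can time a few consecutive requests and compare the first against the rest. A sketch (it fetches a local file:// URL only so the loop runs anywhere; swap in your real site URL):

```shell
# Time several consecutive fetches; on a sleeping shared-host sandbox
# the first time_total is typically much larger than the later ones.
tmpfile=$(mktemp)
url="file://$tmpfile"   # replace with your site, e.g. "http://secretpassagesbooks.com/"
for i in 1 2 3; do
    curl -s -o /dev/null -w '%{time_total}s\n' "$url"
done
rm -f "$tmpfile"
```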
Based on some tests from a tool that I built (The Performance Grader at JoomlaPerformance.com), wow is it bad...
Notice that the HTML took approximately 21.83 seconds to download (from the initial request to the last object being downloaded). Not to mention that the page is nearly 300KB (which is fairly large for only having 7 images)...
This is where the issue is. Notice that the connection and DNS phases are fine, but the generation phase is really REALLY slow. That's where your problems are. It's server-side. So, you need to debug why it's slow. Some areas to look at are the SQL queries that are being executed (and if they are slow), any slow plugins, etc. Try disabling things one at a time to see if each makes a measurable difference or not.
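If you can edit wp-config.php, WordPress itself can record every query it runs along with each query's duration and caller, which makes the "which queries are slow" question concrete. A sketch of the debugging flags (development use only; they add overhead):

```php
// wp-config.php - store every query with its duration and calling function
// in $wpdb->queries (development only: adds memory overhead)
define( 'SAVEQUERIES', true );

// optionally surface PHP errors to a log file while debugging
define( 'WP_DEBUG', true );
define( 'WP_DEBUG_LOG', true );
```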
My "hunch" is that your database is either overloaded, or your queries are very expensive. So, in short, you can try another host to see if that helps (which is the solution more often than you'd think)...
As most of you pointed out, the issue seemed to be with the server. I contacted GoDaddy and explained the situation. It turns out that my site was hosted on one of their legacy servers and was most likely overloaded. They switched me over to one of their grid servers (no cost) and now everything is loading quickly. Thanks for all the responses. I spent a lot of time tweaking the design, removing plugins one by one, reducing as many HTTP requests as possible, and generally went crazy trying to figure out how to best optimize my site. After a few days and a lot of tests, I could not accept that the problem was client-side, especially after all the optimization tests I ran showed my site was OK. So good to have it settled...for now, at least.
GoDaddy's web hosting is the bottleneck for your website; you should probably go for a VPS if you have an advanced website with loads of lookups!
I am looking to set-up a Magento (Community Edition) installation for multiple clients and have researched the matter for a few days now.
I can see that the Enterprise Edition has what I need in it, but surprisingly I am not willing to shell out the $12,000 odd yearly subscription.
It seems there are a few options available to me, but I am worried about the performance I will get out of the various options.
Option 1) Single install using AITOC advanced permissions module
So this is really what I am after: one installation, so that I can update my core files all at the same time and also manage all my store users from one place. The problems here are that I don't know anything about the reliability of this extra product and that I have to pay a bit extra. I am also worried that if I have 10 stores running off this one installation, it might all slow down so much that it keels over, as I have heard a lot about Magento's slowness.
Module Link: http://www.aitoc.com/en/magentomods_advanced_permissions.html
Option 2) Multiple installations of Magento on one server for each shop
So here I have 10 Magento installations on one server, all running happily away without costing any extra money, but I now have 10 separate stores to update and maintain, which could be annoying. Also, I haven't found many other people using this method, and when I have, they are usually asking how to stop their servers from dying. So this route seems like it could be even harder on my server, as I will have more going on; but if my server could take it, each Magento installation would be simpler and less likely to slow down, since none of them has to run 10 shops on its own.
Option 3) Use lots of servers and lots of Magento installations
I just so do not want to do this.
Option 4) Buy Magento Enterprise
I do not have the money to do this.
So which route is less likely to blow up my server? And does anyone have experience with this holy grail of a module?
Thanks for reading and thanks in advance for any help - Chris Hopkins
Let's get the non-options out of the way right away. You don't want to do #3 and #4 is a non-solution. Magento Enterprise Edition doesn't add any features that will let you run multiple customers from one store.
Now, onto possible options. As you state, #1 will allow you to update one version of the code, but of course this comes with some risks. As I understand it, your customers will need to access the stores? If you have multiple customers running on one database and one codebase, you will always run into issues with them affecting each other. For instance, who will control product attributes, which are by nature global? If one store deletes a product attribute, other stores may lose data as a result. If you solve this issue, what about catalog promotions and product categories, etc. Magento was built to handle multiple websites, but not to insulate them from each other, and you will experience problems because of this. As for performance, a large product catalog or customer base will tend to slow down the site, but you can mitigate this using the flat product catalog and making good use of caching.
For option #2, you can run multiple Magento stores, which brings up two main problems. First, as you state, is updating the sites. If you are using a vanilla installation of Magento and not modifying core files, this should be a nonissue. Magento's updater is pretty simple for those installations, with difficulty increasing as you do more mods and have to use more manual processes for updating.
As far as performance, running multiple Magento sites might be slower, but it depends on how you structure them. Regardless of having one or multiple sites, you'll have to load data for each site, so database size won't be terribly different. File size on a server is pretty much a nonissue. In either case, when a customer requests a page, Magento has to spin up the entire framework to serve the request, which is where performance issues start to show themselves. One of the big mitigations for this is to use an opcode cache like Xcache, but with multiple installations you need to give Xcache much more memory to hold all the installations' code. Legitimate problem.
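The Xcache sizing is an ini-level change; the values below are illustrative assumptions, not recommendations for any particular server:

```ini
; xcache.ini - room for the compiled PHP of all installations;
; a single Magento codebase can easily occupy 64-128 MB
xcache.size  = 512M
xcache.count = 4     ; number of cache splits, typically = CPU cores
```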
My recommendation? Start on one machine, multiple installs. Work your way up on number of installs, and when the server doesn't support any more, move on. Keep your code changes out of the core and use extensions that can be easily updated, so updates are easy. That should mitigate as many of the concerns as possible.
Hope that helps!
Thanks,
Joe
We handle a couple dozen magento "installs" using a single code base, but multiple databases. Essentially we've done a rough job of creating a multi-tenanted Magento.
Here's how we're doing it:
Use nginx as a reverse proxy to handle some basic routing rules, and to set some server variables (fastcgi_params) based on the request.
We hardcode routing rules into Nginx Config based on the requested domain, browser language, and visitor location.
Set a server variable using Nginx fastcgi_params as "client-id"
Make copies of the app/etc folder with the convention of app/[client-id]/etc
override Mage.php variable $etcDir to $etcDir = self::getRoot() . DS . $_SERVER['CLIENT_ID'] .'/' . 'etc'; (You'll have to apply some logic here, to ensure that this can fail elegantly)
Edit app/etc/[client-id]/local.xml to point to a fresh db with magento tables and base content already imported. (You'll have to also set the URL in the core_config table, or in the local.xml file for anything to work)
Modify the include path of app/code/local/ to be app/code/local/[client-id]/ in Mage.php (yes, shoot me for overriding core code, but it's the only way we could find)
Setup Session Handling to be done in a Redis db, with db # unique per client
Override getVarDir() in Mage_Core_Model_Config_Options to include [client-id] in the path. (This is to ensure that you aren't sharing cache between clients)
You probably get the gist so far.
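The domain-based routing and the CLIENT_ID fastcgi_param (steps 1-3 above) might look roughly like this in nginx; all hostnames, ids, and paths here are illustrative, not the actual production config:

```nginx
# nginx - map the requested domain to a client-id, then hand it to PHP
map $host $client_id {
    default        shared;
    shop-a.example a;
    shop-b.example b;
}

server {
    listen 80;
    server_name shop-a.example shop-b.example;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param CLIENT_ID $client_id;  # read as $_SERVER['CLIENT_ID'] in Mage.php
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}
```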
Other things you'll want to consider:
Isolation of media by client-id,
Consolidating all "Admin Panel" urls, and asking Admin user to select [client-id],
Configuring Varnish in a way that is sensible,
Setting up CDN in a way that is sensible,
Modifying Magento installer to support this method, and handle basic configuration automatically.
I think getting a vps account and scaling it up when it becomes necessary will give you the best options for your cost requirements.
For my two cents, I think you are going to run into more problems than pros by throwing everyone into a single installation of Magento, with everyone bumping into one another. Not to mention Customer X on Website Y can't seem to figure out why he can't create an account on Website Z, which he has never been to before (that is a configuration issue, but it could happen).
What I would recommend you do is setup a git repository that has your "base" Magento installation and then have all your clients on different versions setup that you could clone from that main install.
This will give you only one real code base to update (database changes are a different story) and everyone is separate.
We run multiple clients on a single Magento CE installation and use AITOC's Advanced Permissions module to control visibility for our different clients. The module works well, although it has several hiccups and lacks functionality in a handful of areas, which we've had to handle with our own in-house modules. It doesn't seem to have any noticeable effect on performance, as we've been running it this way for months now without any issues. (That said, we do use Amazon EC2 and auto-scaling.)
As I understand it, EE does provide advanced role permissions that would render AITOC's module useless. However, I have also heard that the EULA for EE requires only 1 client per installation. I haven't been able to find hard facts on this, but if it's true, it's truly a deal-breaker, as having an additional EE installation for each client would get tremendously expensive, tremendously quickly. (Maybe someone can confirm yes/no on this, though?)