My site slows down and stops being able to reach certain external services. When we check Process Monitor, we see that it is normally due to the 'w3wp.exe' process (the worker process that runs the website), which regularly reaches 99-100% CPU. Killing the process or restarting the World Wide Web Publishing Service resolves it. My web host says this can only be due to bad coding... can someone comment on this?
When performance testing a reasonably straightforward website (coded in ASP.Net) I saw it slow to a crawl with memory use going through the roof over time. Each time recycling the w3wp process restored performance back to normal.
I never got around to figuring out why (the load we were testing with was way above normal, and it could have been worked around anyway by recycling the w3wp process more frequently), but my bet would have been that viewstate was causing the slowdown. A lot of pages had very large viewstate that wasn't being used in any way, and I can feasibly see how loading large viewstate values might cause memory-related performance degradation over time.
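If unused viewstate does turn out to be the culprit, one low-cost mitigation is simply switching it off where it isn't needed; a minimal sketch, assuming a standard ASP.NET 2.0 site (individual pages that genuinely need it can opt back in via EnableViewState in their @ Page directive):

    <!-- web.config: turn viewstate off by default; pages that need it can opt back in -->
    <system.web>
      <pages enableViewState="false" />
    </system.web>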
What language is the site coded in? I recently ran into the same problem on a server running IIS6/PHP and found the following bug -
http://bugs.php.net/bug.php?id=37575
Upgrading PHP to 5.3 solved the issue.
As the sole user/developer on a Drupal website, I'm experiencing serious performance problems. Several issues occur:
Usually I develop Drupal on our company dev server, but right now I'm at a client's office. The IT guys installed a VM with WAMP on the server they usually use for .NET development. On the first day of development (installing Drupal and the required modules and configuring them), httpd.exe would max out the CPU and loading any page would take minutes. The IT guys just scratch their heads.
I then installed WAMP on the local machine they gave me: some 299.99 Win XP Dell piece of sh*t, nevertheless a P4 2.8 GHz with 2 GB of RAM. The fan blows so loud the entire office gives me dirty looks. Again httpd.exe maxes out, and again any page (especially the admin ones) takes minutes to load.
In Firefox, the Views UI is completely unworkable. A lot of it is loaded with AJAX, and it again takes minutes for the various HTML elements that are dynamically inserted into the UI to appear; try to imagine this.
Chrome seems to handle the JS a bit better but it still takes way too long to complete any kind of action.
The devel_themer module adds tons of markup to the page, which leads to "Allowed memory size of X exhausted" errors (memory_limit = 128M).
Now I'm at the theming stage, where I need to do a LOT of page refreshes. I NEED Firebug, which requires Firefox, which in turn eats up CPU and RAM. What usually takes seconds now takes minutes, and by the time whatever action is completed I've forgotten what I was doing. I'm basically reading news stories in between every page reload.
Now, I know Drupal is resource-intensive, but that it's impossible to develop on a typical Dell / Win XP machine is a bit much, no? At home I work on an iMac and everything runs silky smooth.
I can't imagine I'm the only guy with this problem, since what I'm doing is basically Drupal 101 (no custom modules so far...). Unless someone can offer a solution, I'm concluding that you basically cannot develop a typical Drupal site on a normal desktop computer.
What gives?
So you have abandoned the VM. Check your php.ini file for the memory limit, increase it, and see if there is a performance boost; it's often left at a low default such as 16M.
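A minimal sketch of the relevant php.ini change (256M is just an illustrative value; pick what suits the box, and restart Apache afterwards so it takes effect):

    ; php.ini - raise the per-request memory ceiling for Drupal
    memory_limit = 256M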
HTH
I'd suggest you either spend some time actually tuning your XP/WAMP setup (the default WAMP config is definitely suboptimal) or consider an alternative like Zend Server Community Edition (ZCE). Although not completely free as in speech, it is free as in beer, and it simply builds on a better default configuration for Apache and MySQL.
Although less convenient than WAMP or ZCE since it isn't bundled, a manual install of Apache 2.2 is also usually a good choice.
Also note that, the way devel_themer works, it is constantly creating files in your temp directory, meaning that unless that directory is cleaned regularly, files will accumulate and directory scans will become exceedingly slow. Only a cron.php run will cause Drupal to clean up those files, and only with an up-to-date version of devel. See my patch adding this cleanup at http://drupal.org/node/303443
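On a local Windows dev box with no crontab, a scheduled task that simply requests cron.php keeps that cleanup happening; a rough sketch, assuming the site lives at http://localhost/drupal and wget for Windows is installed:

    rem run-drupal-cron.bat - register with Task Scheduler to run, say, hourly
    wget -q -O NUL http://localhost/drupal/cron.php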
Finally, you mention Firebug, and you might be using the Drupal for Firebug module, which has known performance issues, apparently related to infinite recursion in some cases, although recent versions are supposed to fix this problem. See for instance http://drupal.org/node/303443
A couple of things I've run into that could potentially help:
Unless you actually need it, turn off Locale. It causes a ton of extra queries (at least the last time I looked into it; this may have changed), so if you're not using it, don't put the unnecessary load on your DB.
Just like on a regular development machine, make sure MySQL is properly tuned and configured. This goes for any setup: local, development, or production. Three quarters of the time the database is the bottleneck, so start there (see the sketch after this list).
If you've got the devel module installed and enabled, it has a query log you can tell it to output at the bottom of the page; this should help you with point 2 (the MySQL tuning).
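As a starting point for the MySQL tuning above, a minimal my.ini sketch for a local WAMP-style box (the values are illustrative assumptions, not production advice):

    # my.ini - give InnoDB and the key cache more room than the WAMP defaults
    innodb_buffer_pool_size = 256M
    key_buffer_size         = 64M
    tmp_table_size          = 64M
    max_heap_table_size     = 64M
    # log anything slower than a second so Drupal's worst queries stand out
    # (on MySQL 5.0 the directive is log-slow-queries instead)
    slow_query_log  = 1
    long_query_time = 1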
I'm involved with a project using DotNetNuke version 05.01.04 Community Edition. We are building our new Intranet using it, but performance is terrible.
We have five people adding pages and content to it, and every 15-30 seconds they experience a pause of 10 seconds or longer before the system continues and the next screen loads.
The server is Windows 2003, 3.8GHz with 1GB of RAM. I'm told by our server admin that the CPU and memory performance don't appear to be the bottleneck.
We currently have 350 pages in the system and plan to add 1,000. So we need to resolve this performance problem so that we can enter content and go live.
I just can't see where the bottleneck is. Is there a good way to determine the bottleneck when using DotNetNuke?
Modules installed
Publish:Engage (not currently in use)
Page Blaster (doesn't appear to provide caching when users are logged in using Integrated Authentication)
SimpleGallery
XMod
Content Manager
IIS Setup
Application recycling completely disabled (Apart from a 2am recycle)
New findings: 18th March 2010
The main bottleneck turned out to be a bug in version 5.1.4 that caused 1,300 database round trips on an average page because its in-memory database caching was broken. We've upgraded to 5.2.4, which has resolved this bottleneck.
Now the next biggest bottleneck is the navigation. We've used both DDR:Menu and DDN:Nav, but both have a major impact on performance.
Is there a navigation interface out there that doesn't drain performance so badly?
I think you need to start investigating this using performance profiling tools. For the DNN application itself I'd grab something like JetBrains DotTrace or Red Gate's ANTS Performance Profiler.
For the database, SQL Server Profiler would be the first choice, or a tool such as Red Gate's SQL Response.
Without profiling the application you're just grasping at straws.
And as Tim pointed out in his comment, install Firebug in Firefox with the YSlow add-on to see which resources are taking longest to serve to the browser.
Mitchel Sellers has some good tutorials and checklists to go through with regards to performance in DNN. Start with Explaining High Performance DotNetNuke Configuration and Management (which points to some of his earlier articles).
I have several years of DNN development and maintenance experience. When I have this kind of problem, I start with database cleanup. The next thing is to look for missing indexes and/or rebuild all the indexes periodically (with a SQL job scheduled for that), but the major performance gain usually comes from cleaning up the tables.
Other good things to consider are disabling trace, setting debug mode to false, and turning off DNN features you don't use (the scheduler is the first one to turn off).
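The trace and debug settings are just a couple of attributes in web.config; a minimal sketch of the relevant elements:

    <!-- web.config: both of these should be off on a production DNN site -->
    <system.web>
      <compilation debug="false" />
      <trace enabled="false" />
    </system.web>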
Edit: consider keep alive as well
Hope this helps
Is your database on that server? If so, just throw in some more RAM, or get a faster disk array...
Have you considered creating this lot of pages directly through TSQL? It's not hard to do and may save you a lot of time.
I will apologize in advance as this post is born out of severe frustration.
I have a classic asp website that has been running on Windows 2000/IIS5 for years, and another ASP.NET 2.0 site that we've recently started running on the same servers. So far, everything is running well.
Last year, I tried upgrading (a fresh install) to Windows 2003/IIS 6. The classic ASP site was much slower, about 50% slower based on logs/stats averaged over weeks of use. I tried everything to find out what was slow. Network tweaks. Integrated. Classic IIS 5 mode. In process. Out of process. Nothing ever made things better, and I soon rolled back to IIS 5/2000. The very day we rolled back, performance went right back to where it had been. This happened on more than one server. Eventually, I gave up and chalked it up to 2003 TCP issues of some sort.
I recently installed a Windows 2008/IIS server on a similar but more powerful machine in hopes that things would be better. Much to my happiness, my classic ASP app is faster under Windows 2008. Unfortunately, my ASP.NET app is 50-75% slower for no apparent reason. All of its content loads. It's on the same network as the 2000 machine. The site was copied directly from the other machine, and it's a precompiled web app from Visual Studio 2005.
While the page does hit the database and another server for its initial data, it caches that data for quite a while afterwards. It also uses the same DB servers as the classic site, which is fast, so I know it's not necessarily a connection issue.
I've tried the default app pool and the Classic .NET app pool; it made no difference. I've upped/checked the max threads and max per CPU in all the usual locations, with and without a web garden, and nothing seems to matter. I've double-checked that compilation debug="false" is still set in the web.config.
For a quick benchmark, I used ab.exe (Apache Bench) to send 10 requests, one at a time. Even if I use IE or Firefox to hit the site, it's clearly slower than under 2000, even according to Firebug.
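For anyone unfamiliar with it, an Apache Bench run of that shape looks something like the following (the URL is just a placeholder):

    rem 10 requests, one at a time, against the page under test
    ab.exe -n 10 -c 1 http://testserver/default.aspx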
At this point, I'm frustrated and at a complete loss as to where to start. Has anyone been through this sort of mess before?
Speed depends on many factors. You do need to measure performance on the server itself to understand whether this is a server issue. Enable tracing for your web site in web.config and see which part/function is slowing it down. You can add your own trace calls after each operation to see which block of code is slowest. I'm sure you will find things you can improve/optimize once you know which part of the page is the slowest.
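A minimal sketch of what that tracing looks like in an ASP.NET 2.0 web.config (requestLimit is just an example value):

    <!-- web.config: page-level tracing; results viewable at /trace.axd from the server itself -->
    <system.web>
      <trace enabled="true" localOnly="true" pageOutput="false" requestLimit="50" />
    </system.web>

Custom timings can then be added in code with Trace.Write("category", "message") calls between the operations you suspect.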
In my case, the answer turned out to be simple once I fired up Wireshark. There was one external resource request that could not be resolved, since the test machine had no direct access to the internet like the live machine did.
It's always the little things.
We use Grid Control 10.2.0.4, with a catalog repository database also at 10.2.0.4. It seems that after a week or two of being up, the response time of the web interface gets very poor (20+ seconds to navigate to a new page, when normally 2-3 seconds is seen). The only thing we've found to overcome it is a restart of the catalog database and the GC/OMS. No errors are reported in the alert log, just unbearable slowness. Are there any Oracle DBAs using GC out there who have seen this (and hopefully found a solution)?
We migrated to the same version you are on about a month ago and have not experienced the problems you are. At least not yet. Have you checked the OMS logs? Is there excessive CPU usage and/or disk usage?
It appears that the problem is a symptom of a larger issue with the OMS. The direct cause seems to be that the OMS_HOME/sysman/recv/clob directory contained nearly 100K files. Why that is happening is the subject of an open SR with Oracle. Under Oracle's direction, I deleted nearly everything in that directory, and the performance has improved significantly.
Update, 1/15/2009:
Oracle has opened a bug report for this issue. Anyone on a Windows platform running GC 10.2.0.4 should keep an eye on this directory and periodically clear it out if that isn't happening automatically.
OK we are at the end of our rope here, and I’d really appreciate feedback from the SO community.
Our basic issue is slow performance from our MOSS-based intranet.
Some environment info:
We have a MOSS Standard edition for a collaboration-based site.
The site DB is 29 GB.
We have two VMware-based front-end servers (2x 32-bit CPUs each).
Less than 1,000 users spread over all timezones.
We have one big site collection with subsites.
General symptoms:
Loading the front page and pages that have been 'warmed up' is pretty decent, but pages/sites off the beaten path are very slow to load.
We see spikes where a page all of a sudden takes 30 seconds to load, versus its more normal 2 seconds.
Here’s what we have done already:
Scaled way back on crawling
Enabled object and BLOB caching (see the snippet after this list)
Optimized the VMware setup
Followed the Microsoft IT whitepaper on MOSS/SharePoint best practices (especially list sizes, etc.)
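For reference, the BLOB cache is the single BlobCache line in each front end's web.config, roughly like this (the location and maxSize here are illustrative; the stock entry ships disabled):

    <!-- web.config on each WFE: serve static assets from local disk instead of round-tripping to SQL -->
    <BlobCache location="C:\BlobCache" path="\.(gif|jpg|png|css|js)$" maxSize="10" enabled="true" />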
I don't know what else to do here. Split into multiple site collections?
Switch to 64 bit front-end servers?
Would be great to hear from others who have been in similar situations.
You don't say how much memory your front-end servers have; given that they are 32-bit, I'll assume the maximum per worker process of roughly 2 GB and change.
My advice? Switch to 64-bit, add more memory, and check that you are not using just one w3wp worker process per front end. Have a dig into "web gardens", that is to say configuring multiple w3wp processes per application pool. To start with, try two worker processes per front end and see how that works out. Also make sure they are set to recycle, and that the recycles of each pair of worker processes do NOT overlap; having two or more workers means they can take turns to recycle without cutting off access.
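On IIS 6 (Windows 2003 front ends), the web-garden size can be set from the app pool's Performance tab or from the command line; a sketch, with a hypothetical app pool name:

    rem IIS 6: run from C:\Inetpub\AdminScripts - two worker processes for the pool
    cscript adsutil.vbs set W3SVC/AppPools/SharePointAppPool/MaxProcesses 2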
just my 0.02.
-Oisin
I think your very first task is to determine where the problem actually is; until you know that, you are wasting your time changing things.
Is the database on a separate server or on one of your web servers?
Do you see a CPU/disk bottleneck on your front-end or DB servers?
It sounds like you're worldwide; do you see the same performance problems from networks close to the server, or is it a WAN issue?
Thanks for the helpful advice, all. One thing I just learned is that our object caching has basically not been doing anything! The way it seems to work is that if you have rights to edit ANYWHERE in the site collection, it disables object caching across the portal by default. Since all users have rights to at least something, caching was doing basically nothing!
We discovered this by enabling cache debugging, which puts a small comment in the HTML saying which cache is being used. After changing the setting "Allow writers to view cached content" in the authenticated cache profile, we are still seeing what this does for editors, but for regular viewers the anecdotal evidence is that it is having a big impact!
Yes, caching is the best way to reduce load on the system.
Adding RAM to the SQL server is also good. (64-bit is really a must for your SQL server; for the WFEs it's not so important.)
Not sure you want to recycle the processes, though. I have no evidence for this except a conversation with someone saying that recycling the processes appeared to have solved one performance issue but was introducing others.
Did I mention caching?
SQL Server should handle databases up to 100 GB, but at that size they become hard to manage for backups and the like, so splitting your site into separate site collections is something you may need to plan for now, though this may not be relevant to performance.
Did you take a look at Plan for software boundaries (Office SharePoint Server)?
At first glance, your server fits in their recommended settings.
To improve performance you should take a look at:
64-bit servers
Limiting the number of items displayed in your document lists
Definitely check the disk usage. If you have two VMs and they run off the same disk/SAN, make sure it isn't too busy. Overloaded SANs kill performance.
I'll throw my hat in the ring and recommend 64-bit as well. Perhaps not as an immediate fix, but going forward I would make a move to 64-bit a goal for your entire farm.
I'd also take some issue with Nat's comment that it doesn't matter on the web front ends. I won't debate benchmarks or argue memory addressability. It's really simpler than that.
Microsoft has publicly stated that 2007 is the last version of SharePoint that will run on 32-bit servers. Going forward, 64-bit is going to be a requirement, so like the FRAM oil filter guy says, "pay me now or pay me later"...
We also have performance problems. We upgraded to Windows 2008 64-bit, which made some difference, but not as much as expected.
The big boost came from changing from NTLM authentication to Kerberos. This was our major improvement.
Hope this helps someone.
I run a very similar setup; however, we have over 400 GB spread over 3 site collections. There are a few other things you can try before going down the 64-bit path.
Ensure there are no disk or bandwidth issues between the DB server and the web front ends. The network speed to the database server is critical.
Check the database server itself; if the disks are on a SAN, there may be performance issues if there is contention for the physical drives on the SAN. RAM is also important: the more it can cache in memory, the less time is spent waiting for disks to respond.
Schedule crawling jobs to run outside of regular business hours
You've enabled object caching, which can be a huge help; be sure compression is enabled too! SharePoint sends users a large amount of CSS, which matters for lower-bandwidth users, and it compresses down to a tenth of its size if compression is enabled.
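On IIS 6 front ends, compression can be switched on from the command line; a sketch worth verifying against your own metabase before relying on it:

    rem IIS 6: run from C:\Inetpub\AdminScripts, then restart IIS
    cscript adsutil.vbs set W3SVC/Filters/Compression/Parameters/HcDoDynamicCompression true
    cscript adsutil.vbs set W3SVC/Filters/Compression/Parameters/HcDoStaticCompression true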
Here's another big one: ensure your company's DNS is working correctly in other locations, and look into whether this problem is related to external factors. For example, we have a SonicWall firewall that is brutal on response times for offices connecting through it for anything.
Microsoft has a whitepaper on performance counter monitoring. It is very thorough and will help you narrow the problem down between CPU, RAM, network and disk I/O.
Hope this helps!