I don't expect a straightforward silver bullet answer to this, but what are the best practices for ensuring good performance for SharePoint 2007 sites?
We have a few sites on our intranet, and they are generally thought to run slowly. There's plenty of memory and processor power in the servers, but the pages just don't 'snap' like you'd expect from a web site running on powerful servers.
We've done what we can to tweak setup, but is there anything we could be missing?
There is a known issue where the first request after an IIS application pool has unloaded the SharePoint resources or recycled itself is very slow to spin up.
Details about why that happens and how to fix it can be found here: SharePoint 2007 Quirks - Solving painfully slow spin-up times
Andrew Connell's latest book (Professional SharePoint 2007 Web Content Management Development) has an entire chapter dedicated to improving the performance of SharePoint sites.
Key topics it covers are caching, limiting page load (particularly how to remove CORE.js if it's not needed), working with disposable objects, and how to query SharePoint efficiently.
Two really good tricks I've got are using the CSS Friendly Control Adapters to generate smaller HTML for the common components (menus, etc.), and setting up a server "wake up" so that when IIS sleeps the app pool due to inactivity, you can reawaken it before someone hits your site.
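Here's a minimal sketch of the wake-up idea, assuming a small console app run from Windows Task Scheduler; the class name and URLs are placeholders, not part of the original tip:

    using System;
    using System.Net;

    // Hypothetical warm-up utility: request a few key pages so the app
    // pool is already spun up when the first real user arrives.
    class SharePointWakeUp
    {
        static void Main()
        {
            string[] urls =
            {
                "http://intranet/Pages/Default.aspx",
                "http://intranet/sites/team/default.aspx"
            };

            foreach (string url in urls)
            {
                try
                {
                    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                    request.UseDefaultCredentials = true; // run the task as an account with read access
                    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                    {
                        Console.WriteLine("Warmed {0}: {1}", url, response.StatusCode);
                    }
                }
                catch (WebException ex)
                {
                    Console.WriteLine("Failed {0}: {1}", url, ex.Message);
                }
            }
        }
    }

Schedule it every few minutes (or at least just before business hours) so the worst-case spin-up hits the script instead of your first user.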
Microsoft has released a white paper on this very issue:
How Microsoft IT Increases Availability and Decreases Rendering Time of SharePoint Sites (Technical White Paper, published September 2008)
Download it from here.
SharePoint has a lot of limitations that contribute to performance problems; we might call them performance bottlenecks. These problems occur primarily for the following reasons:
BLOBs overwhelm SQL Server
Too many database trips for lists
You can dramatically improve SharePoint performance with a few intelligent techniques (a caching sketch follows the list):
Externalize Documents (BLOBs)
Cache Lists and BLOBs
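To illustrate the list-caching idea (a minimal sketch of the general technique, not StorageEdge's API; the class name, list name, cache key, and five-minute expiry are all assumptions), you can park list query results in the ASP.NET cache so repeated page loads skip the database trip:

    using System;
    using System.Data;
    using System.Web;
    using System.Web.Caching;
    using Microsoft.SharePoint;

    public static class CachedListReader
    {
        // Returns the "Announcements" list as a DataTable, hitting the
        // database at most once every five minutes per worker process.
        public static DataTable GetAnnouncements(SPWeb web)
        {
            const string cacheKey = "AnnouncementsCache";
            DataTable cached = (DataTable)HttpRuntime.Cache[cacheKey];
            if (cached != null)
                return cached; // cache hit: no database trip

            DataTable items = web.Lists["Announcements"].Items.GetDataTable();
            HttpRuntime.Cache.Insert(cacheKey, items, null,
                DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
            return items;
        }
    }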
Microsoft Office SharePoint Server (MOSS) is an extremely popular product that improves organizational effectiveness through content management and enterprise search, shared business processes, and information sharing across boundaries for better business insight. StorageEdge is a product that enhances SharePoint performance in exactly these ways.
Just a few ideas...
Is displaying your pages as slow when done from the server as from a client? If it's slower from a client, do check your network.
Are your pages very "heavy" (that is, many elements, web parts and so on)? Then maybe it's normal.
Have you noticed them loading more slowly since you added one specific web part? Maybe there's an issue with that specific web part (for example, it accesses a document library that has many - thousands of - documents). If that's the case, try deactivating that specific web part and see if performance improves.
I've noticed that SharePoint loves to add a ton of JavaScript. In a browser with slow JavaScript (say, Internet Explorer), pages sometimes just don't "feel" fast.
Also, if you are running custom code on it: make sure to dispose of your SPWebs after use; that can speed things up a lot! (A disposal sketch is below.)
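For reference, a minimal disposal sketch (the URL is a placeholder): the using blocks guarantee the unmanaged resources behind SPSite and SPWeb are released even if an exception is thrown.

    using System;
    using Microsoft.SharePoint;

    class DisposalExample
    {
        static void Main()
        {
            // Dispose only objects you create yourself; never dispose
            // objects SharePoint hands you, like SPContext.Current.Web.
            using (SPSite site = new SPSite("http://intranet"))
            using (SPWeb web = site.OpenWeb())
            {
                Console.WriteLine("Web title: {0}", web.Title);
            } // both objects are disposed here
        }
    }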
Are you running on virtual or physical servers? We found that it is significantly faster on physical servers. Also, check the disk performance: if you are running the servers from a SAN, slow disks might be a sign that your SAN is over-utilised.
To investigate SharePoint performance issues, I would try these things first, in this order:
Run SQL Profiler for those non-performing pages. The SharePoint API excels at hiding what's going on behind the scenes with respect to database round trips. There are single API calls that, without the developer's knowledge, generate many round trips, which hurt performance (a classic example is sketched below).
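One classic example of such a call (my illustration, not from the original answer): SPList.Items pulls every item back just to count them, while SPList.ItemCount reads stored metadata, and an SPQuery with a RowLimit fetches only the rows you need.

    using System;
    using Microsoft.SharePoint;

    class RoundTripExample
    {
        static void CountItems(SPList list)
        {
            // BAD: materializes the entire item collection just to count it.
            int slow = list.Items.Count;

            // GOOD: reads a stored property, no per-item work.
            int fast = list.ItemCount;
            Console.WriteLine("slow={0}, fast={1}", slow, fast);

            // Fetch only the first 10 items instead of the whole list.
            SPQuery query = new SPQuery();
            query.RowLimit = 10;
            SPListItemCollection firstTen = list.GetItems(query);
            Console.WriteLine("fetched {0} items", firstTen.Count);
        }
    }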
Profile the w3wp.exe process serving your SharePoint site. That will tell you relative API usage. Focus on ticks, not time, and do a top-down inclusive-time analysis to see which calls are taking up most of the time. See here for instructions.
Run Fiddler or Microsoft NetMon to spot potentially excessive client round trips (i.e. between the browser and the web front end server) and redirections (301s).
The three major components of a SharePoint setup are the SharePoint server (the one that runs the WSS/SPS services), the SQL Server database, and IIS.
You said you have decent power for your SharePoint services, and I assume IIS is on a good machine as well.
Usually it's the SQL Server setup hosting the SharePoint databases that slows page loads down. Take a look at the full set of SQL Server performance counters, and consider tuning those databases as well (that includes the OS, stored procedures, network, etc.). A sketch of reading one of those counters programmatically is below.
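As a starting point, here's a sketch of reading one of the usual suspects; it assumes a default SQL Server instance (named instances use a MSSQL$InstanceName: category prefix) and that it runs on, or points at, the database server:

    using System;
    using System.Diagnostics;

    class SqlCounterCheck
    {
        static void Main()
        {
            // Low values here usually mean the database server is
            // starved for memory and is churning its buffer pool.
            using (PerformanceCounter ple = new PerformanceCounter(
                "SQLServer:Buffer Manager", "Page life expectancy"))
            {
                Console.WriteLine("Page life expectancy: {0} seconds", ple.NextValue());
            }
        }
    }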
Hope this adds to your checklist of things you want to take a look at.
Thanks for all the questions and responses posted here. This site usually shows up whenever I search for information on Google, and the answers are usually relevant to the issues I need solved.
I want to preface my question by stating that I've been programming (.NET, XML, T-SQL, AJAX, etc) for less than 2 years, and I still have a lot to learn; so, pardon my ignorance.
Here's my situation (and question): I'm building a social web application, which I know will get a lot of traffic in a short time. As a result, what basic information do I need to have in order not to be overwhelmed? It's currently a one-man affair, and here is the hosting specification I plan to start with: 2 GB RAM, a 600 GB HDD, 1000 GB of bandwidth, and a 2.13 GHz dual-core processor.
I've read about web farms, but I've never had an opportunity to use them, so I'm not entirely sure how to phrase this question: how can one split the same application across multiple physical servers? How do you make all the files act as one entity? And since every .NET application requires a web.config, how is it split among these multiple servers?
I've built smaller projects before, but this is the first big project I'm building, and to be frank, I'm a little intimidated. So, I would like to ensure I know what I'm getting into before starting.
Thank you.
Based on your background I assume you are developing in a .NET environment? If so, I highly recommend you take a look at Windows Azure. Developing your app against Azure will allow you to deploy it on Microsoft's cloud platform. Once deployed, you can shrink and grow your resources according to demand without the relative hassle of setting up multiple servers in multiple locations and managing it all. This lets you pay for a "little bit" of server up front, and if your app gets popular you can easily pay for "web farm"-like power and geographic diversity. It also gives you a decent framework for developing an app that will scale relatively well. That's an 18,000-foot overview; if you put some more details in your question, I'm sure you will get more detailed responses. Best of luck!
Your "social web application" will not have any users if it isn't working and deployed. Don't worry about scaling much until the site actually does something useful and has a few hundred users (or at least a few dozen!). Get it working, find people around you who can help when the going gets tough, and keep at it. Otherwise your concerns about needing to scale will never be warranted.
I'm involved with a project using DotNetNuke version 05.01.04 Community Edition. We are building our new Intranet using it, but performance is terrible.
We have five people adding pages and content to it, and every 15-30 seconds they experience a pause of 10 seconds or longer before the system continues and the next screen loads.
The server is Windows 2003, 3.8GHz with 1GB of RAM. I'm told by our server admin that the CPU and memory performance don't appear to be the bottleneck.
We currently have 350 pages in the system and plan to add 1,000 more, so we need to resolve this performance problem before we can finish entering content and go live.
I just can't see where the bottleneck is. Is there a good way to determine the bottleneck when using DotNetNuke?
Modules installed
Publish:Engage (not currently in use)
Page Blaster (doesn't appear to provide caching when users are logged in using Integrated Authentication)
SimpleGallery
XMod
Content Manager
IIS Setup
Application recycling completely disabled (Apart from a 2am recycle)
New findings: 18th March 2010
The main bottleneck was a bug in version 5.1.4 that caused roughly 1,300 database round trips on an average page because its in-memory database caching was broken. We've upgraded to 5.2.4, which has resolved this bottleneck.
Now the next biggest bottleneck is the navigation. We've used both DDR:Menu and DDN:Nav, but both have a major impact on performance.
Is there a navigation interface out there that doesn't drain performance so badly?
I think you need to start investigating this using performance profiling tools. For the DNN application itself I'd grab something like JetBrains dotTrace or Red Gate's ANTS Performance Profiler.
For the database, SQL Server Profiler would be the first choice, or a tool such as Red Gate's SQL Response.
Without profiling the application, you're just grasping at straws.
And as Tim pointed out in his comment, install Firebug in Firefox with the YSlow add-on to see which resources take longest to serve to the browser.
Mitchel Sellers has some good tutorials and checklists to go through with regards to performance in DNN. Start with Explaining High Performance DotNetNuke Configuration and Management (which points to some of his earlier articles).
I have several years of DNN development and maintenance experience. When I have this kind of problem, I start with database clean-up. Next, look for missing indexes and/or rebuild all the indexes periodically (schedule a SQL job for that), but the major performance gain usually comes from cleaning up the tables.
Other good considerations are disabling tracing, setting debug mode to false, and turning off DNN features you don't use (the scheduler is the first one to turn off).
Edit: consider a keep-alive as well.
Hope this helps
Is your database on that server? If so, just throw in some more RAM, or get a faster disk array...
Have you considered creating that batch of pages directly through T-SQL? It's not hard to do and may save you a lot of time.
My manager wants to know the speed of our website and its load times from different locations around the world, as reported by speed-testing websites/tools.
What are the standard tools / procedures for this?
WebPageTest is a pretty awesome tool, way more detailed than Gomez: waterfall charts, repeat loads, and even videos of how a page loads. It has a few locations and connection speeds you can choose from. It seems to be down right now, but will probably be back up soon.
Gomez is a tool that I've used at a couple of organizations. It will monitor your site at timed intervals from a list of nodes that you select, for either an individual page or an entire transaction (think of a click path through your website).
The reporting capabilities are really good, and you can drill down into individual page requests to identify any performance bottlenecks or issues with site performance. There's also alerting options and the list of nodes and update frequency is excellent.
I've never been involved in the actual purchasing or account management however, so I'm not sure how expensive it is.
I manage a database (Oracle 8i) and web server (IIS) for about 50 simultaneous users on average, with a theoretical limit of 100 simultaneous users; a mid-level system.
We just got a dual-socket, quad-core Xeon beast with 16 GB RAM and SAS RAID-10, and I am exploring the possibility of taking these two separate servers and merging them into two virtual machines, both running on the new server (Windows Server 2008 Hyper-V).
1) In general, what are the performance penalties (as well as any gotchas and hidden consequences) of running both the database and web servers as virtual machines on one mega server versus running them on two separate, slower boxes? Is it a big no-no, or is it something worth trying for a mid-level system that will never need to scale?
2) What are the general performance penalties (in percentage) and gotchas for virtualizing just the database server? We run Oracle 8i (but are considering moving to MS SQL Server).
3) If only stress tests can determine a reasonable answer, what would be the easiest way to test these scenarios (tools/configuration)?
Thanks in advance for any generous knowledge-sharing.
If you are looking to do this, I would check Microsoft's site and best practices on how to do it. There is a podcast on Deep Fried Bytes that talks about how the Microsoft.com site is set up to use virtual servers and some of their practices in implementing it. They don't seem to have performance penalties in how they run it, but I am not certain of the details (it also talks about how they use server virtualization like a real organization would, not like a company with unlimited amounts of money to throw at a problem).
I believe this is the podcast:
http://deepfriedbytes.com/podcast/episode-8-behind-the-scenes-at-microsoft-com/
With regards to databases, see this question:
Virtualized SQL Server: Why not?
Note that this is specific to SQL Server, but many of the same principles will apply to Oracle.
As for web servers, virtualization is a great idea. It can make it easier to increase reliability and scalability.
I think at your level of concurrent user connections, and the power of the machine, you won't have too many performance issues running SQL Server on a VM.
We have a mix of VMware ESX VMs and bare-metal OSes running app, web, and DB servers, and without a doubt the most heavily loaded DBMS is on a bare-metal machine (quad-proc, quad-core, etc.). All the little guys, though, live on VMs, and we haven't noticed any problems (even using iSCSI over GigE).
One thing to consider is you won't get any fault tolerance out of a single setup like this because a CPU failure will bring down the entire box, thus blowing up your whole app.
More info on SQL Server HA and Hyper-V, just FYI:
http://blogs.technet.com/andrew/archive/2008/11/10/sql-server-2008-hyper-v-and-high-availability.aspx
Be aware that Oracle has its own guidelines on running in a virtual machine.
The product I work with utilizes Oracle on the back-end, and for heavy use, the overhead of a VM has had negative effects on it.
8i is well past EOL, and was around before virtualization was a Big Thing(tm), so moving to a newer edition of Oracle might also be a good plan at the time you migrate to virtualization.
Oracle blog article on 11g in a VM - http://blogs.oracle.com/MingMan/2007/11/oracle_database_11g_successful.html
If you're concerned about timing, also be aware of known clock-drift issues in hypervisors, and available fixes (either from the OS or virtualization vendors).
I recently came across an article dealing with Virtualization Security. I thought it would be worth mentioning here.
OK we are at the end of our rope here, and I’d really appreciate feedback from the SO community.
Our basic issue is slow performance from our MOSS-based intranet.
Some environment info:
We have a MOSS Standard edition for a collaboration-based site.
The site database is 29 GB.
We have two VMware-based front end servers (2x 32-bit CPUs each).
Less than 1,000 users spread over all time zones.
We have one big site collection with subsites.
General symptoms:
Loading the front page and pages that have been 'warmed up' is pretty decent, but pages/sites off the beaten path are very slow to load.
We see spikes where a page all of a sudden takes 30 seconds to load, versus its more normal 2.
Here’s what we have done already:
Scaled way back on crawling
Enabled object and BLOB caching
Optimized the VMware setup
Followed the Microsoft IT whitepaper on MOSS SharePoint best practices (especially list sizes, etc.)
I don't know what else to do here. Split into multiple site collections? Switch to 64-bit front end servers?
Would be great to hear from others who have been in similar situations.
You don't say how much memory your front end servers have; given that they are 32-bit, I'll assume the maximum per worker process of roughly 2 GB and change.
My advice? Switch to 64-bit, add more memory, and check that you are not using just one w3wp worker process per front end. Have a dig into "web gardens", that is, configuring multiple w3wp processes per front end. To start with, try two worker processes per front end and see how that works out. Also make sure they are set to recycle, and that the recycling of each pair of worker processes does NOT overlap; having two or more workers means they can take turns recycling without cutting off access.
just my 0.02.
-Oisin
I think your very first task is to determine where the problem actually is; until you know that, you are wasting your time changing things.
Is the database server on a separate server or one of your web servers?
Do you see a CPU/Disk bottleneck on your front end or db servers?
It sounds like you're worldwide; do you see the same performance problems from networks close to the server? In other words, is it a WAN issue?
Thanks for some helpful advice, all. One thing I just learned is that our object caching has basically not been doing anything! The way it seems to work is that if you have rights to edit ANYWHERE in the site collection, object caching is disabled by default across the portal. Since all users have rights to at least something, caching was doing basically nothing!
We discovered this by enabling cache debugging, which puts a small comment in the HTML about which cache is being used. We then changed the setting "Allow writers to view cached content" in the authenticated cache profile.
We are still seeing what this does for editors, but for regular viewers the anecdotal evidence is that it is having a big impact!
Yes, caching is the best way to reduce load on the system.
Adding RAM to the SQL server is also good (64-bit is really a must for your SQL server; for the WFEs it's not as important).
I'm not sure you want to recycle the worker processes, though. I have no evidence for this beyond a conversation with someone who said that recycling the processes seemed to solve one performance issue but introduced others.
Did I mention caching?
SQL Server should handle databases up to 100 GB, but at that size they become hard to manage for backups and the like, so splitting your site into relevant site collections is something you may need to plan for now, though that may not be relevant to performance.
Did you take a look at Plan for software boundaries (Office SharePoint Server)?
At first glance, your server fits in their recommended settings.
To improve performance you should take a look at :
64-bit servers
Limiting the number of items displayed in your document lists
Definitely check the disk usage. If you have two VMs and they run off the same disk/SAN, make sure it isn't too busy. Overloaded SANs kill performance.
I'll throw my hat in the ring and recommend 64 bit as well. Perhaps not as an immediate fix, but going forward, I would put a move to 64 bit as a goal for your entire farm.
I'd also take some issue with Nat's comment that it doesn't matter on the web front ends. I won't debate benchmarks or argue memory addressability; it's really simpler than that.
Microsoft has publicly stated that 2007 is the last version of SharePoint that will run on 32-bit servers. Going forward, 64-bit is going to be a requirement, so as the FRAM oil filter guy says: "pay me now or pay me later"...
We also have performance problems. We upgraded to Windows 2008 64-bit, which made some difference, but not as much as expected.
The big boost came from changing from NTLM authentication to Kerberos. That was our major improvement.
Hope this helps someone.
I run a very similar setup; however, we have over 400 GB spread over 3 site collections. There are a few other things you can try before going down the 64-bit path.
Ensure there are no disk or bandwidth issues between the DB server and the web front ends. The network speed to the database server is critical.
Check the database server itself: if the disks are on a SAN, there may be performance issues when there is contention for the physical drives on the SAN. RAM is also important; the more it can cache in memory, the less time it spends waiting for disks to respond.
Schedule crawling jobs to run outside of regular business hours
You've enabled object caching, which can be a huge help; be sure compression is enabled too! For lower-bandwidth users, SharePoint sends a large amount of CSS that compresses down to a tenth of its size when compression is enabled.
Here's another big one: ensure your company's DNS is working correctly in other locations, and look into whether this problem is related to external factors. For example, we have a SonicWall firewall that is brutal on response times for offices connecting through it for anything.
Microsoft has a whitepaper on performance counter monitoring. It is very thorough and will help you narrow the problem down to CPU, RAM, network, or disk I/O.
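In the same spirit, here's a quick sketch (mine, not from the whitepaper) that samples the standard Windows counters used to tell CPU, memory, and disk bottlenecks apart; sample them over an interval rather than once:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class BottleneckCheck
    {
        static void Main()
        {
            PerformanceCounter cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            PerformanceCounter mem = new PerformanceCounter("Memory", "Available MBytes");
            PerformanceCounter disk = new PerformanceCounter("PhysicalDisk", "Avg. Disk Queue Length", "_Total");

            cpu.NextValue(); // the first CPU sample always reads 0; prime it
            Thread.Sleep(1000);

            Console.WriteLine("CPU: {0:F1} %", cpu.NextValue());
            Console.WriteLine("Available memory: {0} MB", mem.NextValue());
            Console.WriteLine("Avg. disk queue length: {0:F2}", disk.NextValue());
        }
    }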
Hope this helps!