For a .NET component that will be used in both web applications and rich client applications, there seem to be two obvious options for caching: System.Web.Caching or the Ent. Lib. Caching Block.
What do you use?
Why?
System.Web.Caching
Is this safe to use outside of web apps? I've seen mixed information, but I think the answer is maybe-kind-of-not-really.
a KB article warns against non-web-app use on .NET 1.0 and 1.1
The 2.0 page has a comment that indicates it's OK: http://msdn.microsoft.com/en-us/library/system.web.caching.cache(VS.80).aspx
Scott Hanselman is creeped out by the notion
The 3.5 page includes a warning against such use
Rob Howard encouraged use outside of web apps
I don't expect to use one of its highlights, SqlCacheDependency, but the addition of CacheItemUpdateCallback in .NET 3.5 seems like a Really Good Thing.
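Roughly, I'd expect to use it something like this - a minimal sketch only, with the cache key and the rebuild call invented for illustration:

```csharp
// Sketch of CacheItemUpdateCallback (.NET 3.5): the callback runs when the
// entry is due to expire, so the item can be rebuilt without ever vanishing.
using System;
using System.Web;
using System.Web.Caching;

public static class ReportCache
{
    public static void Insert(object report)
    {
        HttpRuntime.Cache.Insert(
            "daily-report",                  // hypothetical key
            report,
            null,                            // no dependencies
            DateTime.UtcNow.AddMinutes(10),
            Cache.NoSlidingExpiration,
            OnUpdate);
    }

    private static void OnUpdate(string key, CacheItemUpdateReason reason,
        out object expensiveObject, out CacheDependency dependency,
        out DateTime absoluteExpiration, out TimeSpan slidingExpiration)
    {
        expensiveObject = BuildReport();     // hypothetical rebuild call
        dependency = null;
        absoluteExpiration = DateTime.UtcNow.AddMinutes(10);
        slidingExpiration = Cache.NoSlidingExpiration;
    }

    private static object BuildReport() { return new object(); }
}
```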
Enterprise Library Caching Application Block
other blocks are already in use so the dependency already exists
cache persistence isn't necessary; regenerating the cache on restart is OK
Some cache items should always be available, but be refreshed periodically. For these items, getting a callback after an item has been removed is not very convenient. It looks like a client will have to just sleep and poll until the cache item is repopulated.
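For reference, this is roughly what the block's refresh pattern looks like, assuming its ICacheItemRefreshAction interface (the key loading and expiration values are invented); note that Refresh only fires after the item has already been removed, which is exactly the window where readers see a miss:

```csharp
// Rough sketch against the EntLib Caching Block API; names are illustrative.
using System;
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

[Serializable]
public class RepopulateOnRemoval : ICacheItemRefreshAction
{
    // Called AFTER the item has been removed, so the cache is empty for
    // this key until the re-add below completes.
    public void Refresh(string removedKey, object expiredValue,
                        CacheItemRemovedReason removalReason)
    {
        var cache = CacheFactory.GetCacheManager();
        cache.Add(removedKey, LoadItem(removedKey),
                  CacheItemPriority.Normal, this,
                  new AbsoluteTime(TimeSpan.FromMinutes(5)));
    }

    private static object LoadItem(string key) { return new object(); }  // hypothetical loader
}
```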
Memcached for Win32 + .NET client
What are the pros and cons when you don't need a distributed cache?
These are the items that I consider for the topic of Caching:
MemCached Win32
Velocity
.NET Cache
Enterprise Library Caching Application Block
MemCached Win32: Up until recently I have used MemCached Win32. This is akin to a web farm (many servers serving the same content for high availability), but it is a cache farm. This means that you can install it locally on your web server initially if you don't have the resources to go bigger. Then, as you go down the road, you can scale horizontally (more servers) or vertically (more hardware). This is a product that was ported from the original MemCached to work on Windows, and it has been used extensively on very high traffic sites. http://lineofthought.com/tools/memcached
Velocity: This is Microsoft's answer to products such as MemCached. MemCached has been out for quite some time; Velocity is in CTP mode. I must say that from what I have read so far this product will certainly turn my head once it is out, but I can't bring myself to run big production projects on a CTP product with zero track record. I have started playing with it though, as once it gains momentum MemCached won't even compare for those locked into the Windows world! http://blogs.msdn.com/velocity/
.NET Cache: There is no reason to discount the standard .NET Cache. It is built in and ready to use for free and with no (major) set up required. It offers flexibility by way of offering mechanisms for storing items in local memory, a SINGLE state server, or a centralized database. Where Velocity steps in is when you need more than a single state server (cache in memory) and don't want to use a slow database for holding your cache.
Enterprise Library Caching Application Block: I stay away from all of the Enterprise Library application blocks. They are heavy frameworks that give more than I generally require! As long as you remember to wrap everything that touches code that isn't your own and follow simple coding rules, stick with any of the other methods over this one! (Just my opinion, of course - MySpace leverages as much as it can out of the Enterprise Library application blocks!)
You don't have to choose up front! I generally create a cache wrapper that I communicate with in my code, with methods such as Get, Set, Exists, Remove, ListKeys, etc. This points to an underlying cache abstraction that can be backed by MemCached, Velocity, or the .NET cache. I use StructureMap (or choose another IoC container) to inject the form of cache I want for a given environment. On my local dev box I might use the .NET cache in the session; in production I generally use MemCached Win32. Regardless of how it is set up, you can easily swap things around to try each system and see what works best for you. You just need to make sure your application knows as little as possible about how things are cached! Once this layer of abstraction is in place you can then do things such as run a compression algorithm (gzip) on all the data going in and out of the cache, which would allow you to store 10 times the amount of data in cache - transparently.
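A minimal sketch of what such a wrapper might look like - the interface shape is just an illustration based on the methods listed above, and the implementation shown happens to be backed by the ASP.NET cache; a MemCached or Velocity version would simply be another class behind the same interface:

```csharp
// Illustrative only - not a definitive design.
using System;
using System.Collections;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public interface ICacheProvider
{
    T Get<T>(string key);
    void Set(string key, object value, TimeSpan lifetime);
    bool Exists(string key);
    void Remove(string key);
    IEnumerable<string> ListKeys();
}

// One possible implementation, backed by the ASP.NET cache.
public class AspNetCacheProvider : ICacheProvider
{
    private static Cache Store { get { return HttpRuntime.Cache; } }

    public T Get<T>(string key)
    {
        object value = Store[key];
        return value == null ? default(T) : (T)value;
    }

    public void Set(string key, object value, TimeSpan lifetime)
    {
        Store.Insert(key, value, null,
                     DateTime.UtcNow.Add(lifetime), Cache.NoSlidingExpiration);
    }

    public bool Exists(string key) { return Store[key] != null; }

    public void Remove(string key) { Store.Remove(key); }

    public IEnumerable<string> ListKeys()
    {
        foreach (DictionaryEntry entry in Store)
            yield return (string)entry.Key;
    }
}
```

The IoC registration (StructureMap or whichever container you prefer) then just maps ICacheProvider to the implementation each environment should use.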
I cover .NET Cache, MemCached Win32, StructureMap, and the appropriate abstractions in my book if you are interested!
ASP.NET 3.5 Social Networking (http://www.amazon.com/ASP-NET-3-5-Social-Networking-Enterprise-ready/dp/1847194788/ref=sr_1_1?ie=UTF8&s=books&qid=1225408005&sr=8-1 )
Andrew Siemer www.andrewsiemer.com blog.andrewsiemer.com www.socialnetworkingin.net
Update
Changed the link that lists sites using memcached. Thank you David for noticing that it was broken!
Bear in mind that the EntLib documentation specifically steers you towards the ASP.NET cache for ASP.NET applications. That's probably the strongest recommendation towards using it here. Plus the EntLib cache doesn't have dependencies, which for me is a big reason not to use it.
I don't think there's a technical limitation as such on shipping System.Web as part of your app, though it's slightly odd that they've put that notice on the .NET 3.5 page. Hanselman actually says he started out being creeped out by this notion, but became convinced. Also, if you read the comments, he says that the block has too many moving parts and the ASP.NET Cache is much more lightweight.
I think this is exactly the kind of problem that Velocity is going to solve, but that's only a preview for now :-(
I'd say use Web.Caching and see how you get on. If you put some kind of abstraction layer over the top of it, you've always got the option to swap it out for the EntLib block later on if you find problems.
Take a look at memcached. It is a really cool, fast and lightweight distributed caching system. There are APIs for several of the most popular languages, including C#. It may not serve well on the client side (unless of course the client is obtaining the cached data from a server of some kind), but if you abstract your usage of memcached to a specific interface, you could then implement the interface with another caching system.
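As a rough illustration of that idea, here is what talking to memcached from C# might look like using the Enyim.Caching client - the client choice, configuration, and key are my assumptions, not something the answer specifies - with the understanding that these calls would normally sit behind your own caching interface:

```csharp
// Rough sketch using the Enyim.Caching memcached client (illustrative names).
using System;
using Enyim.Caching;
using Enyim.Caching.Memcached;

public static class MemcachedExample
{
    public static void Main()
    {
        // In real code the client is a long-lived singleton; it reads the
        // server list from the enyim.com/memcached section of app.config.
        using (var client = new MemcachedClient())
        {
            client.Store(StoreMode.Set, "user:42", "cached payload");
            string value = client.Get<string>("user:42");
            Console.WriteLine(value ?? "(cache miss)");
        }
    }
}
```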
#Davide Vosti
"If they put it in the web namespace, I think's it's for a good reason."
Does that same logic apply to the Concurrency and Coordination Runtime (CCR) in Robotics Studio? No? Didn't think so.
Related
We're planning to introduce DMS to our customer's Sitecore installation. It's a rather popular site in our country and we have to use proxy caching server (it's Nginx in this case) to make it high-traffic-proof.
However, as far as we know, it's not possible to use all the DMS features with a caching proxy enabled - for example, personalization of content: if it gets cached, it won't be personalized.
Is there a way to make use of all the DMS features with proxy cache turned on? If not, how do you handle this problem for high-traffic sites - is it buying more Content Delivery servers to carry the load, or extending current server with better hardware (RAM, CPU, bandwidth)?
You might try moving away from your proxy caching for some pages, or even all.
There's no reason not to use a CDN for static assets and media library assets, so stick with that
Leverage Sitecore's built-in html cache for sublayouts/renderings - there are quite a few options for caching
Use Sitecore's Debug feature to track down the slowest components on your site
Consider using indexes instead of doing "fast" or Sitecore queries
Don't do descendants query "//*" (I often see this when calculating selected state for navigation - hint: go the other way, calculate the ancestors of the current page)
#jammykam wrote an excellent answer on this over here.
John West wrote a great blog post on this also, though a bit older.
Good luck!
I've been wondering about this myself.
I have been thinking of implementing an ajax web service that:
- talks to the DMS and returns JSON
- allows you to render the personalised components client side
- allows you to trigger analytics events
I have been googling around and I haven't found anyone that has done it and published the information yet. The only place I have found something similar is actually in the mobile sdk, but I haven't had a chance to delve into it yet.
I have also not been able to use proxy server caching and DMS together successfully. For extremely high loads, I have recommended to clients to follow the standard optimization and scaling guidelines, especially architecting for proper Sitecore sublayout and layout caching for as much of the site as possible. With that caching done, follow it up by distributing across multiple Content Delivery nodes with load balancing to help support high volume with personalization at the same time.
I've heard that other CMSs with personalization use a JavaScript approach to load the personalized content on the client side, but I would be worried about losing track of the analytics data that is gathered when personalized content is loaded and interacted with.
The following questions were posed by a customer who is about to write a large-scale XPages application. While I think the questions are actually too broad to fit the Stack Overflow style, they are interesting, and the collective knowledge of the experts here could yield better results than one person answering them:
How many concurrent users can use XPages applications on one Lotus Domino server (there are several applications on the Lotus Domino server, not just one)?
How can we identify and analyze memory leaks on a Lotus Domino server when running XPages applications?
How can we write XPages the right way to achieve the best performance and avoid memory leaks?
What code methods and objects should not be used?
What are typical errors when a LotusScript developer begins to write code for XPages? What are the best practices?
How can we build a centralized, consolidated XPages application for 10,000-15,000 users? How many servers do we need? How should the XPages application be configured in that case?
How do we balance users across servers?
I will provide my insights; please share yours.
How long is a string? It depends on how the server is configured, and "application" could be a single form or hundreds. Only a test can tell. In general: build a high-performance server, preferably with 64-bit architecture and lots of RAM, and make that RAM available to the JVM. If the applications use attachments, use DAOS and put it on a separate disk - and of course make sure you have the latest version of Domino (8.5.3 FP1 at the time of this writing).
There is the XPages Toolbox that includes a memory and CPU profiler.
It depends on the type of application. Clever use of the scopes for caching, and Expression Language and beans instead of SSJS. You leak memory when you forget .recycle(). Hire an experienced lead developer and read the book, also the other one and two. Consider threading off longer-running code, so users don't need to wait.
Depends on your needs. The general lessons of Domino development apply when it comes to db operations, so FTSearch over DBSearch, scope usage over @DbColumn for parameters, and EL over SSJS.
Typical errors include: all code in the XPages -> use script libraries. Too much @DbLookup/@DbColumn instead of scopes. Validation in buttons instead of validators. Violation of decomposition principles. Forgetting to use .recycle(). Designing applications "like old Notes screens" instead of single-page interaction. Too little use of partial refresh. No use of caching. Too little object orientation (creating function graves in script libraries).
This is a summary of questions 1-5; nothing new to answer.
When clustering Domino servers for XPages and putting a load balancer in front, the load balancer needs to be configured to keep a session on the same server, so partial refreshes and Ajax calls reach the server that has the component tree rendered for that user.
It depends on the server setup. I have, for example, an XPages extranet with 12,000 registered users spanning roughly 20 XPages applications. That runs on one Windows 2003 server with 4 GB RAM and a quad-core CPU. The data amount is about 60 GB across these 20 applications. No DAOS, no beans, just SSJS. Performance is excellent. So when I upgrade this installation to 64-bit and DAOS, the application will scale even more. So 64-bit and lots of RAM are the key to a lot of users.
I haven't done anything around this
Make sure to recycle when you do document loops. Use the openntf.org debug toolbar; it will save a lot of time until we have a debugger for XPages.
Always remember that whatever you are doing will be done by several users at once, so try to cut down the number of lookups or getElementByKey calls. Try to use ViewNavigator when you can.
It all depends on how many users use the system concurrently. If you have 10,000-15,000 concurrent users, then you have to look at what the applications do and how many users will use the same application at the same time.
Those are my insights into the questions.
I'm writing a web application using ASP.NET MVC 3. I want to use the MemoryCache object, but I'm worried about it causing issues with load-balanced web servers. When I google it, it looks like that problem is solved on the server side, i.e. using AppFabric. If a company has load-balanced servers, is it on them to make sure they have AppFabric or something similar running? Or is there anything I can or should do as a developer for this?
First of all, for ASP.NET you should look at the ASP.NET Cache instead of MemoryCache. MemoryCache is a generic caching API that was introduced in .NET 4.0 to provide an equivalent of the ASP.NET Cache in non-web applications.
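To make the distinction concrete, here is a small side-by-side sketch (keys and values are invented):

```csharp
// Both caches are per-process/per-server; neither solves the load-balancing
// question on its own.
using System;
using System.Runtime.Caching;   // MemoryCache - .NET 4.0, usable anywhere
using System.Web;               // HttpRuntime.Cache - the ASP.NET cache

public static class CacheComparison
{
    public static void Demo()
    {
        // Generic .NET 4.0 API:
        MemoryCache.Default.Set("settings", LoadSettings(),
            new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(5) });

        // ASP.NET cache:
        HttpRuntime.Cache.Insert("settings", LoadSettings(), null,
            DateTime.UtcNow.AddMinutes(5),
            System.Web.Caching.Cache.NoSlidingExpiration);
    }

    private static object LoadSettings() { return new object(); }  // hypothetical
}
```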
You're correct to say that AppFabric resolves the issue of multiple servers having their own instances of cached data, in that it provides a single logical cache accessible from all your web servers. Before you leap on it as the solution to your problem, there are a couple of things to consider:
It does not ship as part of Windows Server - it is, as you say, on you to install it on your servers if you want to use it. When AppFabric was released, there was a suggestion that it would ship as part of the next release of Windows Server, but I haven't seen anything about Windows Server 2012 that confirms that to be the case.
You need extra servers for it, or at least you're advised to have them. Microsoft's recommendation for AppFabric is that you run it on dedicated servers. Which means that whilst AppFabric itself is a free download, you may be incurring additional Windows Server licence costs. Speaking of which...
You might need Enterprise Edition licences. If you want to use the High Availability features of AppFabric, you can only do this with servers running Enterprise Edition, which is a more expensive licence than Standard Edition.
You might not need it after all. Some of this will depend on your application and why you want to use a shared caching layer. If your concern is that caches on multiple servers could get out of sync with the database (or indeed each other), some judicious use of SqlCacheDependency objects might get you past the issue.
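For illustration, a rough sketch of that SqlCacheDependency approach; it assumes the database and table have already been enabled for SQL cache notifications (aspnet_regsql) and that a <sqlCacheDependency> database entry named "MyDb" exists in web.config - both names are invented here:

```csharp
// Sketch only - requires the SQL cache notification setup described above.
using System;
using System.Web;
using System.Web.Caching;

public static class ProductCache
{
    public static void CacheProducts(object products)
    {
        var dependency = new SqlCacheDependency("MyDb", "Products");

        // Each server evicts this entry when the Products table changes
        // (within the configured poll interval), so local caches cannot
        // drift far from the database.
        HttpRuntime.Cache.Insert("products", products, dependency);
    }
}
```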
This CodeProject article Implementing Local MemoryCache Invalidation with Redis suggests an approach for handling the scenario you describe.
You didn't mention the flavor of load balancing that you are using: "sticky" or "stateless". By far the easiest solution is to use sticky sessions.
If you want to use local memory caches and stateless load balancing, you can end up with race conditions when the cross-server invalidation messages arrive late. This can be particularly problematic if you use the Post-Redirect-Get pattern so common in ASP.NET MVC. This can be overcome by using cookies to supplement the cache invalidation broadcasts. I detail this in a blog post here.
We have 3 front-end servers each running multiple web applications. Each web application has an in memory cache.
Recreating the cache is very expensive (>1 min). Therefore we repopulate it using a web service call to each web application on each front-end server every 5 minutes.
The main problem with this setup is maintaining the target list for updating and the cost of creating the cache several times every few minutes.
We are considering using AppFabric or something similar but I am unsure how time consuming it is to get up and running. Also we really need the easiest solution.
How would you update an expensive in memory cache across multiple front-end servers?
The problem with memory caching is that it's unique to the server. I'm going with the idea that this is why you want to use AppFabric. I'm also assuming that you're re-creating the cache every few minutes to keep the in memory caches in sync across all servers. With all this work, I can well appreciate that caching is expensive for you.
It sounds like you're doing a lot of work that probably isn't necessary. This article has some detail about the caching mechanisms available within SharePoint. You may be interested in the output cache discussed near the top of the article. You may also want to read the linked TechNet article and the linked article called "Custom Caching Overview".
The only SharePoint way to do that is to use the Service Application infrastructure. The only problem is that it takes some time to understand how it works, and it's too complicated to build from scratch. You might consider downloading one of the existing applications and renaming classes/GUIDs to match your naming conventions. I used this one: http://www.parago.de/2011/09/paragoservices-a-sharepoint-2010-service-application-sample/. In this case you can have a single cache per N front-end servers.
SharePoint isn't the speediest of server applications, and I've read about a few tips to speed it up. What steps do you think are necessary to increase performance so it can be used to host a high-traffic site?
At the end of the day SharePoint is just a complicated web site with all the standard components.
In order to optimize performance you need to analyze each component and determine which one is a problem, and then adjust it accordingly.
We're in the process of implementing a SharePoint website for 1,000 concurrent users, which may or may not be large; however, some steps we are taking are:
Implementing a detailed caching strategy, to cache webpart content intelligently.
Use load-balanced servers to ensure all our hardware is utilised rather than lying idle.
We've undertaken capacity planning for the existing solution, so we have a good idea which component is the bottleneck for us (the SQL Server), and we will ensure the server can cope with the expected load and future growth of the site.
We're also using hardware load balancers, which will ensure our network and the related servers operate as expected; again, this is something to investigate before you implement a SharePoint website.
We're also ensuring our webparts don't generate unnecessary HTML and don't return unnecessary data, as this will slow down loading times.
Something which I definitely think is a good idea is to have a goal to work towards, as you can spend a huge amount of money and time optimizing SharePoint, which may prove unnecessary.
My additional best bets are:
use x64 to allow more RAM on your server
Make the best use of your application pool recycling http://blogs.msdn.com/joelo/archive/2007/10/29/sharepoint-app-pool-settings.aspx
Make sure all custom code properly disposes SPWeb and SPSite objects using this http://blogs.msdn.com/rogerla/archive/2008/02/12/sharepoint-2007-and-wss-3-0-dispose-patterns-by-example.aspx (a brief sketch of the pattern follows below)
utilize MS Capacity Planning Tool http://technet.microsoft.com/en-us/library/bb961988.aspx
Plan your site collection and database sizes. Keeping your databases and site collections under control will be key
GOVERNANCE GOVERNANCE GOVERNANCE - Plan for site size limits and expiration strategy. Old data should be deleted or archived for better performance. http://technet.microsoft.com/en-us/office/sharepointserver/bb507202.aspx
I cannot emphasize enough that proper early planning is essential for a successful SharePoint implementation.
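Since incorrect disposal is such a common culprit, here is a minimal sketch of the pattern those dispose guidelines boil down to (the method and URL are illustrative):

```csharp
using Microsoft.SharePoint;

public static class DisposalExample
{
    public static string GetWebTitle(string url)
    {
        // Objects you create yourself must be disposed...
        using (SPSite site = new SPSite(url))
        using (SPWeb web = site.OpenWeb())
        {
            return web.Title;
        }
        // ...but never dispose SPContext.Current.Site / SPContext.Current.Web;
        // SharePoint owns those.
    }
}
```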
In addition to caching and hardware, try to make sure that your master pages and page layouts are not customized (unghosted) into the database, which requires a database call to retrieve them.
Do this by ensuring the files get released to the 12 hive in your solution.
Don't forget careful selection of the built-in cache settings (choose the right one for your situation).
Use the BLOBCache.
Use IIS Compression/caching (the defaults are not enough BTW).
Ensure your SQL box can keep up, especially during indexing/crawling. Splitting the Application roles (indexing vs search query and dedicated WFE for indexing/crawling) helps.
BTW if you're running VMWare VMs for your WFEs, Windows NLB breaks (though not consistently), so use hardware NLBs or DNS round-robin, etc.
If you don't need > 2gig RAM for the IIS Application Pool on a WFE, don't bother with 64bit on the WFE.
Just my 2c.