SharePoint 2010 - Caching data

I am developing a SharePoint 2010 intranet site, and to optimize performance I would like to place much of the data that is used, for example, to fill ComboBoxes and ListViews into a cache. What is the best approach to implementing this? My idea was to use the ASP.NET cache.
Note: much of the data is returned directly by web services, and some comes from external SharePoint lists. The site will be installed on a web farm.

I suggest using the ASP.NET cache. That way you do not have to build your own and implement features such as thread safety, dependencies, expiration, and so on.
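For example, a minimal sketch of caching web-service results in the ASP.NET cache; the cache key, the expiration, and the CallCountriesWebService method are placeholder assumptions:

    // Cache lookup data (e.g. combobox items) in the ASP.NET cache.
    // HttpRuntime.Cache is usable from pages, web parts and services alike.
    using System;
    using System.Collections.Generic;
    using System.Web;
    using System.Web.Caching;

    public static class LookupCache
    {
        public static List<string> GetCountries()
        {
            var countries = HttpRuntime.Cache["Countries"] as List<string>;
            if (countries == null)
            {
                // Hypothetical call to the backing web service.
                countries = CallCountriesWebService();

                // Keep the data for 30 minutes; tune to how often it changes.
                HttpRuntime.Cache.Insert("Countries", countries, null,
                    DateTime.UtcNow.AddMinutes(30), Cache.NoSlidingExpiration);
            }
            return countries;
        }

        private static List<string> CallCountriesWebService()
        {
            return new List<string>(); // placeholder for the real service call
        }
    }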

As with most SharePoint farms, if you have multiple web front ends (WFEs) in your farm, then a simple ASP.NET caching approach is not going to be enough and may not work at all in some cases, because each WFE keeps its own independent copy of the cache.
You need a distributed cache product like NCache or AppFabric. It gets more complex if your SharePoint environment has multiple farms distributed across the world.
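To illustrate the distributed option, here is a hedged sketch using the AppFabric caching client; the cache name "default" is an assumption, and the cache cluster must already be set up and referenced in configuration:

    // Requires the AppFabric client assemblies
    // (Microsoft.ApplicationServer.Caching.Client/Core).
    using System;
    using Microsoft.ApplicationServer.Caching;

    public static class FarmCache
    {
        // DataCacheFactory is expensive to create; build it once and reuse it.
        private static readonly DataCacheFactory Factory = new DataCacheFactory();

        public static object GetOrAdd(string key, Func<object> loader)
        {
            DataCache cache = Factory.GetCache("default"); // assumed cache name

            object value = cache.Get(key);
            if (value == null)
            {
                value = loader();
                cache.Put(key, value); // now visible to every WFE in the farm
            }
            return value;
        }
    }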

Related

Use of Apache JMeter for conducting an HTTP flood against an ASP.NET WebForms site

I want to conduct an HTTP flood against a test website that I have designed in Visual Studio 2017. It is an ASP.NET WebForms site, so I want to ask whether Apache JMeter is a proper tool for such a project. I have done some research and found reports from other users that Apache JMeter has problems with ASP.NET apps in some cases, so I'm a little confused. Also, I am considering using two computers, one for running the website and the other for running the JMeter script, in order to avoid the resource contention that may lead to inaccurate metrics. Is it possible to carry out the HTTP flood this way? Any other suggestions are welcome.
Thanks.
JMeter doesn't have any problems with ASP.NET websites (or any other websites). JMeter is backend-agnostic and knows nothing about the server-side technology stack, as it basically just exchanges HTML and headers with the server.
Just make sure to perform correlation of dynamic parameters like __VIEWSTATE and __EVENTVALIDATION, for instance with a Regular Expression Extractor using a pattern such as name="__VIEWSTATE" id="__VIEWSTATE" value="(.+?)", and you should be good to go.
With regards to the "flood" approach, I would rather recommend implementing real-life user scenarios; that is, the JMeter test should represent real usage of your web application by a real user with a real browser, including the business steps (login, browse, search, etc.) and the technical side of things (cookies, embedded resources, headers, cache).
Running JMeter on a machine separate from the system under test, as you propose, is also a good idea, since the load generator's own CPU and memory consumption would otherwise skew your measurements.

IIS maximum worker processes

I've been researching this quite a bit and just wanted some professional opinions on it. I am working on an eCommerce site that is really slow at submitting orders. Would creating a web farm be beneficial? If not, what would be, server- or network-wise (load balancers, etc.)?
Assume the app is optimized as much as it can be for now and we need to look at other alternatives.
Environment:
Windows Server 2008 R2
IIS 7.5
SQL Server 2008
Ideas:
IIS and database on separate servers
Load balancers
Maybe we can find a cheaper solution.
If it is only the submit process that is slow, perhaps you can just improve the code and hand the slow parts to background workers so they don't block the request thread, as sketched below.
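A rough sketch of that idea; Order, SaveOrder and SendConfirmationEmail are made-up placeholders, and note that in ASP.NET anything business-critical belongs in a durable queue, since work queued in-process is lost when the app pool recycles:

    using System.Threading;

    public class OrderService
    {
        public void SubmitOrder(Order order)
        {
            SaveOrder(order); // the part the user must wait for

            // Fire-and-forget the slow, non-critical work so the
            // request thread can return immediately.
            ThreadPool.QueueUserWorkItem(_ => SendConfirmationEmail(order));
        }

        private void SaveOrder(Order order) { /* persist to SQL Server */ }
        private void SendConfirmationEmail(Order order) { /* hypothetical slow step */ }
    }

    public class Order { /* order details elided */ }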
Otherwise, what about just upgrading your server?
Static file compression.
Create separate application pools for static and dynamic pages within the website.
Move the heavy feature onto a separate app pool and check if that helps.
Optimize on the SQL Server end. Indexing?
Other than this, I would look into the code. IIS performance is highly dependent on the pages being executed.
You may also want to check in the browser developer tools which part of the request is actually clocking the most time. That will give you a better idea of which aspect of performance you should really be concerned about.

MemoryCache object and load balancing

I'm writing a web application using ASP.NET MVC 3. I want to use the MemoryCache object, but I'm worried about it causing issues with load-balanced web servers. When I google it, it looks like that problem is solved on the server side, e.g. using AppFabric. If a company has load-balanced servers, is it on them to make sure they have AppFabric or something similar running? Or is there anything I can or should do as a developer about this?
First of all, for ASP.NET you should look at the ASP.NET Cache instead of MemoryCache. MemoryCache is a generic caching API that was introduced in .NET 4.0 to provide an equivalent of the ASP.NET Cache in non-web applications.
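For reference, a minimal sketch of the two APIs side by side; keys and values are placeholders, and note that both caches are local to the worker process, which is exactly why load balancing is a concern:

    using System;
    using System.Runtime.Caching; // MemoryCache (.NET 4.0+)
    using System.Web;             // ASP.NET cache

    public static class CacheExamples
    {
        public static void AspNetCache()
        {
            // In-process cache tied to the web application's lifetime.
            HttpRuntime.Cache.Insert("key", "value", null,
                DateTime.UtcNow.AddMinutes(10),
                System.Web.Caching.Cache.NoSlidingExpiration);
            var value = (string)HttpRuntime.Cache["key"];
        }

        public static void MemoryCacheEquivalent()
        {
            // Same idea, but usable outside ASP.NET as well.
            MemoryCache.Default.Set("key", "value", new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(10)
            });
            var value = (string)MemoryCache.Default.Get("key");
        }
    }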
You're correct to say that AppFabric resolves the issue of multiple servers having their own instances of cached data, in that it provides a single logical cache accessible from all your web servers. Before you leap on it as the solution to your problem, there are a couple of things to consider:
It does not ship as part of Windows Server; it is, as you say, on you to install it on your servers if you want to use it. When AppFabric was released, there was a suggestion that it would ship as part of the next release of Windows Server, but I haven't seen anything about Windows Server 2012 that confirms that to be the case.
You need extra servers for it, or at least you're advised to have them. Microsoft's recommendation for AppFabric is that you run it on dedicated servers, which means that while AppFabric itself is a free download, you may be incurring additional Windows Server licence costs. Speaking of which...
You might need Enterprise Edition licences. If you want to use the High Availability features of AppFabric, you can only do this with servers running Enterprise Edition, which is a more expensive licence than Standard Edition.
You might not need it after all. Some of this will depend on your application and why you want to use a shared caching layer. If your concern is that caches on multiple servers could get out of sync with the database (or indeed each other), some judicious use of SqlCacheDependency objects might get you past the issue.
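A hedged sketch of the SqlCacheDependency approach; it assumes SQL Server query notifications are available, SqlDependency.Start(connectionString) has been called at application start-up, and the table and column names are made up:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Web;
    using System.Web.Caching;

    public static class ProductCache
    {
        public static object GetProducts(string connectionString)
        {
            var cached = HttpRuntime.Cache["Products"];
            if (cached != null) return cached;

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT ProductId, Name FROM dbo.Products", conn))
            {
                // The dependency must be created before the command executes;
                // the cache entry is evicted when the query's results change.
                var dependency = new SqlCacheDependency(cmd);

                conn.Open();
                var table = new DataTable();
                table.Load(cmd.ExecuteReader());

                HttpRuntime.Cache.Insert("Products", table, dependency);
                return table;
            }
        }
    }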
This CodeProject article Implementing Local MemoryCache Invalidation with Redis suggests an approach for handling the scenario you describe.
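The gist of that approach, sketched here with the StackExchange.Redis client (the channel name and Redis address are assumptions): each server keeps its own MemoryCache but subscribes to an invalidation channel, and whichever server updates the data publishes the affected key.

    using System.Runtime.Caching;
    using StackExchange.Redis;

    public static class InvalidatingCache
    {
        private static readonly ConnectionMultiplexer Redis =
            ConnectionMultiplexer.Connect("localhost"); // assumed Redis address

        public static void Start()
        {
            // Every web server runs this: drop the local entry whenever
            // any server announces that a key has changed.
            Redis.GetSubscriber().Subscribe("cache-invalidation",
                (channel, key) => MemoryCache.Default.Remove(key));
        }

        public static void Invalidate(string key)
        {
            // Called by the server that just wrote to the database.
            MemoryCache.Default.Remove(key);
            Redis.GetSubscriber().Publish("cache-invalidation", key);
        }
    }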
You didn't mention the flavor of load balancing that you are using: "sticky" or "stateless". By far the easiest solution is to use sticky sessions.
If you want to use local memory caches and stateless load balancing, you can end up with race conditions when cross-server invalidation messages arrive late. This can be particularly problematic if you use the Post-Redirect-Get pattern so common in ASP.NET MVC. It can be overcome by using cookies to supplement the cache invalidation broadcasts, as sketched below. I detail this in a blog post here.
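A rough sketch of the cookie idea, with made-up names throughout: the request that changes data sets a version cookie, and a later GET (possibly served by another machine) refuses any locally cached entry older than that version.

    using System;
    using System.Runtime.Caching;
    using System.Web;

    public static class CookieGuardedCache
    {
        // After a write, record the change time in a cookie so this
        // user's next request can detect a stale local cache entry.
        public static void MarkChanged(HttpResponse response, string key)
        {
            response.Cookies.Set(new HttpCookie("inv-" + key,
                DateTime.UtcNow.Ticks.ToString()));
        }

        public static object Get(HttpRequest request, string key)
        {
            var entry = MemoryCache.Default.Get(key) as CachedEntry;
            if (entry == null) return null;

            var cookie = request.Cookies["inv-" + key];
            if (cookie != null && long.Parse(cookie.Value) > entry.LoadedAtTicks)
                return null; // local copy predates this user's own write

            return entry.Value;
        }

        public class CachedEntry
        {
            public long LoadedAtTicks;
            public object Value;
        }
    }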

Moving to a New Stack - AJAX, REST & NoSQL

All,
I'm beginning to explore which frameworks (open source) and tools to use for building web applications. What should I select and learn for the following layers?
Layer 1
A client-side JavaScript/AJAX library or framework that will invoke REST-style services provided by layer 2.
Layer 2
Provides a framework to rapidly create REST-style services on top of existing applications and on top of a NoSQL document-oriented database provided by layer 3. I need this layer in cases where I need to expose REST-style services from my traditional apps and RDBMS.
Layer 3
Which NoSQL database, CouchDB or MongoDB, would work well with layer 2?
Will I need an MVC framework like RoR, or a web/component framework like Wicket? Am I missing anything?
I also need recommendations for which tooling/IDE (and associated plugins) for the development environment. Thanks in advance for your answers/thoughts.
We've had pretty decent luck using a Java stack:
For the presentation layer, we use jQuery and jQuery UI, with FreeMarker for XHTML/CSS templating; the various UIs invoke the REST web services.
Restlet (www.restlet.org) is a wonderfully rich framework for crafting REST web services in Java. We decided to use it on a major product after it was strongly recommended to us by the engineering director of a top 10 e-commerce site in the US. And everything he said about it was true.
Unless you know you're going to face a really large amount of write volume, you're probably best off using one of the tried and true SQL databases supporting ACID transactional guarantees. We used Oracle, then switched to PostgreSQL, using the MyBatis (formerly iBatis) SQL Mapper to shield our code from the details of the database. With the advent of 64-bit addresses and scads of inexpensive DRAM, plus SSDs, these old workhorses do scale quite high.
If you are anticipating very large amounts of writes, by all means consider a so-called "NoSQL" database. I heard very good things about Vertica from the top network ops folks at a major technology company last week. MongoDB and CouchDB both look interesting. Or you may be able to leverage persistent distributed cache technology like Redis or EhCache to offload a traditional database.
The task you're trying to accomplish determines the technology you use.
If you're interested in the .NET platform, consider:
RavenDB for the data layer.
WCF Data Services for your REST services layer. Here's a guide to WCF Data Services with some great video tutorials.
If you need a web front-end, consider ASP.NET MVC or plain HTML as you see fit; jQuery and AJAX modules work well with both.
Arguably, ASP.NET MVC IS a REST endpoint of sorts, so you could skip the WCF services.
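For instance, a minimal sketch of an MVC 3 controller acting as a JSON/REST-style endpoint; the controller name and the data it returns are made up:

    using System.Web.Mvc;

    public class ProductsController : Controller
    {
        // GET /products/details/42 returns JSON rather than a view,
        // so the MVC application itself serves as a simple REST endpoint.
        public JsonResult Details(int id)
        {
            var product = new { Id = id, Name = "Sample" }; // placeholder data
            return Json(product, JsonRequestBehavior.AllowGet);
        }
    }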
All the above can be leveraged with free tools, including:
Visual Studio Express
Raven DB's HTML interface at http://localhost:8080/
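Tying back to the RavenDB suggestion above, a hedged sketch of its client API against that local instance (shown as in the early RavenDB client; the Product class is made up):

    using Raven.Client.Document;

    public class RavenExample
    {
        public static void Run()
        {
            // Points at the same local instance as the HTML interface above.
            using (var store = new DocumentStore { Url = "http://localhost:8080" })
            {
                store.Initialize();
                using (var session = store.OpenSession())
                {
                    session.Store(new Product { Name = "Sample" });
                    session.SaveChanges(); // persisted over RavenDB's REST API
                }
            }
        }
    }

    public class Product
    {
        public string Id { get; set; } // assigned by RavenDB on store
        public string Name { get; set; }
    }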

Querying list items and using SharePoint web services vs the object model

My company is looking into writing a custom application that will need to perform many list item queries across multiple site collections. It will need to run on WSS 3.0, and it 'would be nice' if it worked on WSS 2.0 as well. It won't be designed for MOSS/SPS, but again it 'would be nice' if it worked on those platforms. There is no restriction on which .NET version should be used for the solution.
For this type of application, what would be better: the object model/API or SharePoint web services? The primary factor I'm considering is performance, followed by features and functionality. Thanks!
The object model is better, as you gain access to additional features and the full detail of the list items, such as the version history.
The object model is also better for performance (as long as you Dispose() your SPSite and SPWeb objects properly).
The SharePoint object model has some differences between v2 and v3, but if you code against the v2 reference it will also work fully with v3.
The web services have not changed at all between v2 and v3, which explains why they do not expose any of the new v3 features.
The reason the object model wins on performance is that you avoid serialising the data as XML, transmitting a large chunk of XML, and then deserialising it. The object model spares your memory and bandwidth.
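A minimal sketch of the object-model route, including the disposal mentioned above; the site URL, list name and CAML filter are placeholders:

    using Microsoft.SharePoint;

    public class ListQueryExample
    {
        public static void Run()
        {
            // SPSite and SPWeb hold unmanaged resources; wrap them in
            // using blocks so they are disposed deterministically.
            using (SPSite site = new SPSite("http://intranet")) // placeholder URL
            using (SPWeb web = site.OpenWeb())
            {
                SPList list = web.Lists["Tasks"]; // placeholder list name

                SPQuery query = new SPQuery
                {
                    Query = "<Where><Eq><FieldRef Name='Status'/>" +
                            "<Value Type='Text'>Active</Value></Eq></Where>",
                    RowLimit = 100
                };

                foreach (SPListItem item in list.GetItems(query))
                {
                    // item["Title"], item.Versions, etc. are available here.
                }
            }
        }
    }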
The first thing to consider is: will my code run on a SharePoint server, or remotely?
If it's running remotely, you don't have any choice: use the web services.
If it's running on a SharePoint server, I would suggest using the object model, as performance will be better, you'll have access to more of the API, and authentication will be easier (i.e. automatic).
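For the remote case, a hedged sketch of calling the Lists.asmx web service; it assumes a Visual Studio web reference named ListsService was added for http://server/_vti_bin/Lists.asmx, and the list name and CAML are placeholders:

    using System.Net;
    using System.Xml;

    public class RemoteListQuery
    {
        public static void Run()
        {
            // Proxy class generated by the web reference (name is an assumption).
            var lists = new ListsService.Lists();
            lists.Credentials = CredentialCache.DefaultCredentials;

            var doc = new XmlDocument();
            XmlNode query = doc.CreateElement("Query");
            query.InnerXml = "<Where><Eq><FieldRef Name='Status'/>" +
                             "<Value Type='Text'>Active</Value></Eq></Where>";

            // Everything travels as XML - this is the serialisation
            // overhead described in the earlier answer.
            XmlNode items = lists.GetListItems(
                "Tasks", null, query, null, "100", null, null);
        }
    }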
+1 to the other posters.
If you decide to go the OM route, you can compile for both WSS 2.0 and WSS 3.0 from a single code base. These should get you started:
Developing for Sharepoint 2003 using Visual Studio 2008?
How to reference two versions of an API?
Can the OM be used inside an InfoPath form? Currently I'm using the web services to pull in the list data I want, but I would rather use the OM.
