Why is SharePoint 2013 incredibly slow to load after setting enableSessionState to true?

We are migrating various solutions (some all SharePoint, some built with .NET) from SharePoint 2007 to 2013. It has been challenging, to say the least. However, today a .NET application needed enableSessionState set to true. We already have session state working (stored in SQL Server; we have multiple WFEs) and everything has been going fine. However, after setting the value to true, the entire site, and almost every application, slowed to a crawl and basically became non-functional. Has anyone out there heard of something like this happening? If so, what was the resolution?
Also, I've been looking, but I cannot find exactly what happens to the system when that particular value is set to true. I know that pages can access session as a result, but what happens to the site/SharePoint when you set the value to true and save the web.config?
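For context, the change in question amounts to a couple of lines in web.config: the page-level enableSessionState switch plus the sessionState element that points at the SQL Server state database. A rough sketch of what that section typically looks like (the server name, timeout, and other values here are placeholders, not the actual farm settings):

    <system.web>
      <!-- Lets pages read and write Session -->
      <pages enableSessionState="true" />
      <!-- Session data kept out-of-process in SQL Server so it is shared across the WFEs -->
      <sessionState mode="SQLServer"
                    sqlConnectionString="Data Source=SESSIONDB01;Integrated Security=SSPI"
                    timeout="20" />
    </system.web>

With SQLServer mode, a page that has session enabled loads its session from that database at the start of the request and writes it back at the end, so the flag does add a round trip to every session-enabled request.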

Related

~85.5% of native code when profiling with JetBrains' dotTrace?

Whenever I run a profiling session (both sampling and timeline), it reports that up to 70-80 percent of the entire execution is occupied by native code. That seems suspicious; I'm not sure whether I have a buggy environment (due to restrictions beyond my control I am stuck on Windows 7) or whether this is actually fine.
This is normal when profiling a web application hosted on IIS, since IIS is itself a native application. As long as all of your app's methods are present in the snapshot, you have nothing to worry about.
See this page for details: https://www.jetbrains.com/help/profiler/Profile_ASP_Web_Site.html
"Note that it is normal that the snapshots contain large amount of
native code"

Browser Link always asking for "Do you want to stop debugging"

Just wondering: my browser keeps asking if I want to stop debugging every time I hit the Browser Link refresh, which is very annoying and is slowing down dev time.
Has anybody else come across this?
cheers
Updated Answer, Root Cause Now Found
After what is now TWO years of seeing this error on and off, I finally understand what's causing it. A BIG thank you goes out to Damian Edwards for mentioning this in a community stand-up!
As developers, we often do all of our development in Visual Studio in Debug mode rather than Release mode, and it's very common for us to run our projects with F5. In that case VS runs the project with the debugger attached, no surprise there.
So it turns out that the "Do you want to stop debugging?" dialog you get when you try to refresh via Browser Link is really saying: "Hey, you made some changes that look like they might require recompiling the Razor view in order to refresh the page, and in order to do that Visual Studio needs to stop the debugging session. Is that OK?"
And the fix? This is gonna blow your mind. When you want to use Browser Link to rapidly refresh the page while doing HTML/CSS changes, and never see this message again, do this: run the project using CTRL+F5 instead of F5. That runs the project without firing up the debugger, and you probably weren't going to use the debugger anyway if you were planning on doing a bunch of HTML/CSS work on a view using Browser Link. :-) That's it, no more error message. Bam. You're welcome. (It took me TWO YEARS to figure that out. Hand against forehead, eyes rolling.)
I have left my original answer below because it did seem to help in some cases and has already received a couple of upvotes, but in hindsight I think it was more of a coincidental observation than a root cause.
Original
I have been struggling with this issue for nearly a year, and I may have just discovered the cause. I was running two copies of Visual Studio, each with a different web project, at the same time. Then, when I tried to get Browser Link to refresh the browser in one copy of Visual Studio, it asked "Do you want to stop debugging".
I then quit the second copy of Visual Studio and re-ran the web project in the first copy, and when I tried to get Browser Link to refresh the browser it worked fine with no prompt. Yay. A better error message than "Do you want to stop debugging" might have been "It looks like you are running two web projects at the same time in different copies of Visual Studio. Browser Link does not support this; please close one of them."
You may want to check out this post: https://stackoverflow.com/a/21706524/4079626. If you are using an older version of IE (like IE9), then long-polling may be the issue.
Short answer
Browser Link will only use WebSockets on Windows 8 or Windows Server 2012.
Longer answer
The following would explain the issue if you're using Visual Studio on Windows 7, Windows Vista or Windows Server 2008:
IIS (Express) depends on the .NET Framework implementation in System.Net.WebSockets to handle WebSocket connections; as you can read in the link to MSDN, you simply don't get an actual implementation of the necessary classes when you install .NET 4.5 on Windows 7.
So in that case, the server can't agree to the client's request to change from standard HTTP to the WebSocket protocol, which forces the SignalR client to use one of the fallback options (in your case: long-polling).

Set up a development environment for MVC2 as it would be for a development team

So at work we have our environment set up as most of you probably do as well. We have a centralized code base (controlled through SVN), which runs off a database on the same server (Integration). We bring this code base down and copy the database locally to work on it on our own machines.
This is what I need to figure out how to set up. I want to set up a database in SQL Server 2008 locally, have it connected to my MVC 2 app, and also have the app set up in local IIS so I can test it without going into the debugger and running the VS2010 Development Server every time.
So far my searching hasn't really turned up any articles that explain how to set this up, even though I feel like it is the most common thing to do (as most software shops are set up this way).
Any sources or directions would be awesome.
Thanks!
I am running Windows 7 Ultimate, Visual Studio 2010, SQL Server 2008, and IIS (whichever version comes with Windows 7).
The answer is "it depends". Although most software shops are set up like this, there are tweaks to the setup, especially when it comes to the database.
I found 2 cases:
In most of the places where I have worked, I found they have a DEV database server where the developers are given access to a particular database which the entire team works on. They install SQL Server Management Studio on the dev machines for connecting to the server/database.
Some shops have SQL Express set up on each developer machine, where they maintain a local copy of the database (same as yours)... This comes with the additional headache of syncing multiple copies of the database. We used this with Visual Studio Database Projects in the past and it worked like a charm in many cases, where we would generate "delta" updates and apply them to the server database. Obviously, these updates were done by someone who knew the VS DB Pro features and was given some dedicated hours to perform the sync.
I still prefer a "controlled environment" as opposed to #2 where the schema changes are controlled by only a few...
Just my 2 cents...
Lots of this really depends on the app and local details, but we do the same sorts of things all the time. First and foremost, you'll want to develop some standardization and/or conventions about the environment: it makes life a lot easier if everyone agrees that they should be running the local test DB at .\SQLEXPRESS and if they can agree on what the local URLs should be.
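To make that concrete, here is a hedged sketch of the kind of web.config entry that can then stay identical on every developer's machine (the connection string name and database name are made up for illustration):

    <connectionStrings>
      <!-- Same on every dev box because everyone runs the local instance as .\SQLEXPRESS -->
      <add name="AppDb"
           connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=MyAppDev;Integrated Security=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>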
Perhaps the tallest pole in the tent is automating the database setup; there are some real challenges there, especially if your app needs a significant amount of data to be usable. I haven't found a perfect solution here; typically we use a combination of a utility called sseutil to create database instances and a database migration framework to make schema changes. Something like RoundhousE looks compelling here.

IIS5, 6, and 7 Speed Issues After Upgrade

I will apologize in advance as this post is born out of severe frustration.
I have a classic ASP website that has been running on Windows 2000/IIS5 for years, and another ASP.NET 2.0 site that we've recently started running on the same servers. So far, everything is running well.
Last year, I tried upgrading (fresh install) to Windows 2003/IIS6. The classic ASP site was much slower, about 50% slower based on logs/stats averaged over weeks of use. I tried everything to find out what was slow. Network tweaks. Integrated mode. Classic IIS5 mode. In-process. Out-of-process. Nothing ever made things better, and I soon rolled back to IIS5/2000. The very day I rolled back, performance went right back to where it had been. This happened on more than one server. Eventually, I gave up and chalked it up to 2003 TCP issues of some sort.
I recently installed a Windows 2008/IIS server on a similar, but more powerful, machine in hopes that things would be better. Much to my happiness, my classic ASP app is faster under Windows 2008. Unfortunately, my ASP.NET app is 50-75% slower for no apparent reason. All of its content loads. It's on the same network as the 2000 machine. The site was copied directly from the other machine, and it's a precompiled web app from Visual Studio 2005.
While the page does hit the database and another server for initial data, it caches that data for quite a while. It also uses the same DB servers as the classic site, which is fast, so I know it's not necessarily a connection issue.
I've tried the default app pool and the classic .NET app pool; it made no difference. I've upped/checked the max threads and max per CPU in all the usual locations; web garden or not, nothing seems to matter. I've double-checked that compilation debug="false" is still set in the web.config.
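For reference, this is the web.config fragment in question; in a release deployment it should look roughly like this (nothing beyond the debug attribute matters for this check):

    <system.web>
      <!-- debug="true" makes ASP.NET compile pages without optimizations; production should keep this false -->
      <compilation debug="false" />
    </system.web>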
For a quick benchmark, I used ab.exe (Apache Bench) to send 10 requests, 1 at a time. Even if I use IE or Firefox to hit the site, it's clearly slower than under 2000, even according to Firebug.
At this point, I'm frustrated and at a complete loss as to where to start. Has anyone been through this sort of mess before?
Speed depends on many factors. You need to measure performance on the server itself to understand whether this is a server issue. Enable tracing for your web site in web.config and see which part/function is slowing it down. You can add your own trace statements after each operation to see which block of code is slowest. I'm sure you will find things you can improve/optimize once you know which part of the page is the slowest.
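If it helps, page-level tracing is switched on with the trace element in web.config; the attribute values below are only illustrative:

    <system.web>
      <!-- pageOutput="true" appends the trace table to each page; otherwise browse trace.axd -->
      <trace enabled="true" pageOutput="true" requestLimit="40" localOnly="true" />
    </system.web>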
In my case, the answer turned out to be a simple one once I fired up Wireshark. There was one external resource request that could not be resolved, since the test machine had no direct access to the internet like the live machine did.
It's always the little things.

MS Team Foundation Server in distributed environments - hints, tips, and tricks needed

Is anyone out there using Team Foundation Server within a team that is geographically distributed? We're in the UK, trying to work with a team in Australia, and we're finding it quite tough.
Our main two issues are:
Things are being checked out to us, without us asking, on a get latest.
Even when using a proxy, most things take a while to happen.
Lots of really annoying little things like this are hardening our arteries, stopping us from delivering code, and frankly creating a user experience akin to pushing golden syrup up a sand dune.
Is anyone out there actually using TFS in this manner, on a daily basis with (relative) success?
If so, do you have any hints, tips, tricks or gotchas that would be worth knowing?
P.S. Upgrading to CruiseControl.NET is not an option.
Definitely upgrade to TFS 2008 and Visual Studio 2008, as it is the "v2" version of Team System in every way. It fixes lots of small and medium-sized problems.
As for "things being randomly checked out", this is almost always due to Visual Studio deciding to edit files on your behalf. Try getting latest from Team Explorer, with nothing open in Visual Studio, and see if that behavior persists. I bet it won't!
Multiple TFS servers is a bad idea. Make sure your proxy is configured correctly, as it caches repeated GETs. That said, TFS is a server-connected model, so it'll always be a bit slower than true "offline" source control systems.
Also, if you could edit your question to contain more specific complaints or details, that would help -- right now it's awfully vague, so I can't answer very well.
We use TFS with a somewhat distributed team - they aren't too far away but connect via a slow and unreliable VPN.
For your first issue, get latest on checkout is not the default behaviour. (Here's an explanation) There is an add-in that will do it for you, though.
Here's the workflow that works for us:
Get latest
Build and verify nothing's broken
Work (changes pended)
Get latest again
Deal with merge conflicts
Build and verify nothing's broken
Check in
[edit] OK, looks like you rephrased this part of the question. Yes, Jeff's right: VS decides to check some files out "for you," like .sln and .proj files. It also automatically checks out any source file that you edit (that's what you want, though, right? Although you can change that setting in Tools > Options > Source Control).
The proxy apparently takes a while to get ramped up (we don't use it) but once it has cached most of the tree it's supposed to be pretty quick. Can you do some monitoring and find the bottleneck(s)?
Anything else giving you trouble, other than get-latest-on-checkout and speed?
From my understanding, you can have multiple TFS application servers in different locations. They can either both talk to the same SQL Server, or you could use SQL Server mirroring. Having your own local TFS server would likely speed up your development times.
