Why does debugging keep timing out in IIS7? - visual-studio

When I am debugging on my Windows 7 IIS7 machine, I get this error during a debug:
The web server process that was being
debugged has been terminated by IIS.
this can be avoided by configuring
application pool setting in IIS. see
help for further details.
What am I doing wrong?

When you are debugging, IIS will not service any other requests until you are done stepping through your code. That includes the "ping" request that IIS sends to itself. Since IIS doesn't hear back from itself, it decides to shut itself down, which promptly terminates your debugging.
The solution is to increase the Ping Maximum Response Time in the application pool settings from its default value of 90 seconds. Set it high enough to give yourself time to step through your code (say, 300 seconds).
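If you'd rather script the change than click through IIS Manager, a one-liner with appcmd.exe should do it; this is a sketch assuming an application pool named "MyAppPool" (substitute your own pool name):
REM raise the ping response limit to 5 minutes (300 seconds)
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.pingResponseTime:00:05:00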
Microsoft has a long-winded write-up here.
Edit: Others have suggested setting "Ping Enabled" to false. There are several reasons why I prefer to keep it in place, just with a larger interval, but the most important is that you will (most likely) have worker process pinging enabled in production, and you should strive to develop and debug under a configuration that is as close to production as possible. If you do NOT have ping enabled in production, then by all means disable it locally as well.

http://weblogs.asp.net/soever/archive/2009/06/18/debugging-sharepoint-asp-net-code-smart-key-codes-disable-timeout.aspx
Your App Pool -> Advanced Settings -> Ping Enabled to False

IIS has a health-checking feature which periodically checks whether an IIS worker process is hung or otherwise unusable. If a worker process is stopped in the debugger, it looks unhealthy from the perspective of IIS, so IIS kills it and spins up a new process.
To change this behavior (on your dev workstation; you don't want to disable this in production!), go to the IIS management tool, select the Application Pools node in the left pane, right-click the app pool that your app lives in, and choose "Advanced Settings". From there, in the "Process Model" section, set "Ping Enabled" to False. You may also want to set the idle timeout to a very large number.
See this IIS.NET article for more discussion of this issue and a screenshot. See this TechNet article for how to set these settings via code/script outside the admin tool.
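For the script route, a minimal appcmd.exe sketch ("MyAppPool" is a placeholder for your pool's name):
REM disable health-check pinging for the pool
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.pingingEnabled:false
REM optionally raise the idle timeout as well (here: 4 hours)
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.idleTimeout:04:00:00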

If you have Microsoft's SCOM (System Center Operations Manager) running and configured where you work (assuming this is not a for-fun project), and you are able to create a management pack for it or know someone who can, that may help you pinpoint what is causing the issue. I realize it's a long shot, but if that describes your scenario, it's what I would do if no other solution is found.

Related

Trouble - IIS suddenly high memory usage

What happens is that a web service on my IIS server significantly increases its RAM usage.
It normally works in the range of 200~700 MB, but for a few days now it has suddenly started using 3, 4, 5 GB of RAM.
As a stopgap, so users aren't blocked, I kill the process from Task Manager and it goes back to normal, but some time later the memory grows again:
[Task Manager screenshot]
I used Performance Monitor and saw that this is the counter that increases:
[Performance Monitor screenshot]
I really don't know how to solve this. I'm stuck; can anyone help me?
There are a lot of reasons your IIS worker process could be using that much memory (or CPU). To start, you should look at which web requests are currently executing in IIS to see if that helps you identify the issue and narrow down the troubleshooting.
Via the IIS worker processes view
In the IIS management console you can view the running worker processes: which IIS application pool is consuming the resources, and the web requests currently running in it. After selecting "Worker Processes" from the main IIS menu, you can see the currently running IIS worker processes. If you double-click a worker process, you can see all of its currently executing requests.
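If you'd rather check from a command prompt, appcmd.exe exposes roughly the same information; the elapsed-time filter (in milliseconds) is optional:
REM list running worker processes and the application pool each one serves
%windir%\system32\inetsrv\appcmd.exe list wp
REM list requests that have been executing for more than 5 seconds
%windir%\system32\inetsrv\appcmd.exe list requests /elapsed:5000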

Which local machine components could affect an RDP session performance-wise?

I've got the following totally reproducible scenario, which I'm unable to understand:
There is a very simple application which does nothing other than call CreateObject("Word.Application"), that is, create an instance of MS Word via COM interop. This application is located on a Windows Terminal Server. The test case is to connect via RDP and run the application, which outputs the time taken by the CreateObject call.
The problem is that the execution time is significantly longer if I connect from one specific notebook (an HP Spectre): it takes 1.7 s (+/- 0.1 s).
If I connect from any other machine (notebook or desktop), the execution time is between 0.2 and 0.4 s.
The execution times don't depend on the RDP account used, the screen resolution, or local printers. I even did a fresh install of Windows on the HP notebook to rule out other side effects. It doesn't matter whether the HP notebook is connected via WLAN or a USB network card. I'm at a loss to explain the 4x to 8x difference in execution time compared to any other machine.
Which reason (component/setting) could explain this big difference in execution time?
Some additional information: I tried debugging the process using an API monitor and could see that >90% of the execution time is actually being spent between a call to RpcSend and RpcReceive. Unfortunately I can't make sense of this information.
It could be credential management getting in the way somehow.
Open the .rdp file with Notepad and add
enablecredsspsupport:i:0
This setting determines whether RDP will use the Credential Security Support Provider (CredSSP) for authentication, if it is available.
Related documentation
https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ff393716%28v%3dws.10%29
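For reference, a stripped-down .rdp file with the setting added might look like this (the address is a placeholder; keep whatever other lines your file already contains):
full address:s:terminalserver.example.com
enablecredsspsupport:i:0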
Given your observation that the time is spent between RpcSend and RpcReceive, it could be that some service is stopped on your client machine, such as a DCOM or other COM-related service (they usually have "COM" or "transaction" in their names).
Some of those services are set to Manual start and are launched by the system on demand to service a request, and starting a service on demand adds a delay.
I suggest you open Computer Management -> Services (or Run -> services.msc) and compare the COM-related services running on your "slow" client with those on your "fast" clients, then try setting them to start Automatically instead of Manually or Disabled.
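One way to make that comparison is from an elevated command prompt; a sketch (service names vary by Windows version; these are common COM-related ones):
REM check the state of a few COM-related services
sc query EventSystem
sc query COMSysApp
sc query MSDTC
REM switch a service from Manual to Automatic start (the space after "start=" is required)
sc config COMSysApp start= auto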
Also, try running API Monitor against those processes to pinpoint the time-consuming call more precisely.

IIS Orphaned Requests

We have IIS 7 running a Classic ASP app, and I've been noticing the following issue lately. Over the course of the day, if I look at Server Node --> Worker Processes, some requests seem to pile up there. By the end of the day the elapsed time is something crazy like 12 hours. These requests all sit in the ExecuteRequestHandler stage.
There is no way anything is executing for that long, and I cannot seem to reproduce the issue. I have tried dumping w3wp.exe, using FRT (Failed Request Tracing), and all that good stuff, but I have some general questions:
Is there a setting that controls WHEN IIS stops a request? To be specific: in development, if I purposely design a page to be slow (i.e. update a SQL table that's locked), then CLOSE the browser and monitor the requests in IIS, I see that the request still sits there for about 20 seconds before being removed. Is that 20 seconds a random interval, or can it be SET somewhere? To be clear, it's not that the page takes 20 seconds to execute; it would execute forever (in this test case), but IIS seems to give up on it 20 or so seconds after I close the browser.
Is there some way to see "orphaned" requests, i.e. requests in the app pool that nobody is waiting for anymore?
What else can I do to try and debug this? A dump of w3wp says there are client connections with an HTTP request state of HTR_READING_CLIENT_REQUEST.
I keep getting suggestions to modify IIS config settings such as AspRequestQueueMax, but every time I try looking those up in ApplicationHost.config I don't see them set, so either I'm looking in the wrong place or default values are not explicitly written to the config. This raises two questions: a) How do you READ these config values, i.e. get the current value? b) How do you SET them?
A Classic ASP request will keep running until the script timeout is reached, regardless of whether the client is still connected. I believe the default is 90 seconds, but an .asp file can override this by setting the Server.ScriptTimeout property directly (which is pretty common). If your request queue is filling up, this is likely the reason, and changing the defaults will not help.
If you can edit the ASP code, you can add logic like this in potentially long-running sections:
If Not Response.IsClientConnected Then Call Response.End()
You can also do a global search of your code for Server.ScriptTimeout to see where the overrides are coming from.
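A quick way to run that search from a command prompt at the root of the site (the file masks are just an example; add whatever extensions your includes use):
findstr /s /i /n /c:"Server.ScriptTimeout" *.asp *.inc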
If you do want to change the default script timeout, here is where it is stored:
https://www.iis.net/configreference/system.webserver/asp/limits
To change via the IIS7 GUI go to: (web site) > (features view) > ("IIS" category) > "ASP" > expand "Limits Properties" node > "Script Time-out"
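If you prefer the command line, appcmd.exe also answers the read-vs-set question from the post above: it can dump the section as currently configured and write a new value (values never explicitly set will simply not appear, which matches what you saw in ApplicationHost.config). A sketch for a server-wide change; the 5-minute value is only an example:
REM (a) read the current ASP limits, including scriptTimeout
%windir%\system32\inetsrv\appcmd.exe list config /section:asp
REM (b) set the default script timeout to 5 minutes
%windir%\system32\inetsrv\appcmd.exe set config /section:asp /limits.scriptTimeout:00:05:00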

Troubleshooting MVC4 Web API Performance Issues

I have an asp.net mvc4 web api interface that gets about 54k requests a day.
http://myserv.x.com/api/123/getstuff?whatstuff=thisstuff
I have 3 web servers behind a load balancer that are set up to handle the HTTP requests.
On average, response times are ~300 ms. However, lately something has gone awry (or maybe it has always been there): sporadically, response times come back in 10-20 seconds. This happens even for the same request hitting the same server directly rather than through the load balancer.
GIVEN:
- The system has been passed down to me, so there may be gaps in the IIS configuration, etc.
- Database: SQL Server 2008R2
- Web Servers: Windows Server 2008R2 Enterprise SP1
- IIS 7.5
- Using MemoryCache aggressively with Model and Business Objects with eviction set to 2hrs
- Looked at the logs but really don't see anything significantly relevant
- One application pool...no other LOB applications running on this server
Assumptions & Ask:
Somehow I'm thinking that something is recycling the application pool, or IIS worker processes are shutting down and restarting, causing each new request to warm up and re-cache everything. It's so sporadic that it's tough to troubleshoot right now. The same request to the same server comes back as fast as expected (back-to-back N requests) since it was cached, in about 300 ms... but wait about 5-10-20 minutes and that same request to the same server takes 16 seconds.
I have limited tracing to go by since these are prod systems, so I can only expose so much logging detail. Any help, or information on similar behavior somebody else has run into, is appreciated. Thx
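One low-impact step while it stays sporadic: have IIS write an event log entry for every recycle, so the slow responses can be correlated with recycle events. A sketch with appcmd.exe, assuming a pool named "MyAppPool":
REM log an event for each possible recycle cause
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.logEventOnRecycle:"Time,Requests,Schedule,Memory,IsapiUnhealthy,OnDemand,ConfigChange,PrivateMemory"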
UPDATE:
The w3wp.exe process grows to ~3 GB. Somehow it gets wiped out and the PID changes, so either it or something else is killing it every 3-4 minutes. I see tons of warnings in my web server (IIS) log:
A process serving application pool 'MyApplication' suffered a fatal
communication error with the Windows Process Activation Service. The
process id was '1732'. The data field contains the error number.
After 4-5 days of assessing IIS and configuration vs. internal code issues, I finally found the problem, with little to no help from the WinDbg or DebugDiag IIS tools. Those tools produce so much information, even with mini dumps or log trace stacks, that they can be full of red herrings. The best bet was to reproduce it on a close copy of a production system, which we did not have at the time; it took Ops a while to stand one up.
Needless to say, the problem had to do with over-caching business objects. There was a race condition where updates on a certain table were updating an attribute of the corresponding business object (updates were coming from multiple servers), which caused an OOC stack overflow: the caching recursively re-cached itself to death, killing the w3wp.exe process and pseudo-recycling it. It was one of those edge cases that is incredibly hard to test and reproduce in a non-production environment.

Problem with starting WAS in debug mode

I want to start my WAS (WebSphere Application Server) server on Windows in debug mode, but my application is huge and the server times out while starting in debug mode. I have increased the timeout value to 1000. What should I do?
If you're running your server from Rational Application Developer, it is possible that your server is simply waiting on a breakpoint. Sounds silly, but hey, it happened to me a few times before.
If it's a regular installation:
If it's a Network Deployment topology, make sure that the deployment manager and the node agent are up and running. Look at their logs; they're often overlooked.
If it's a single-server topology, then it's possible that your debug port (7777 by default) is in use; when that happens, the debugger process never quite "connects" to WebSphere (see the netstat check below).
To sum it up: it starts with the logs...
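For the port-in-use case above, a quick check from a command prompt (7777 is the default debug port; adjust if yours differs):
REM show listeners on the debug port; the last column is the owning PID
netstat -ano | findstr :7777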
Try changing the connection settings on your server from RMI to SOAP. If you are behind a firewall and your application is timing out on startup, this may help. Also, do you get any other information when the timeout occurs?
