IIS7.5 session hanging on local development machine - visual-studio-2010

Summary
Sessions within my local IIS7.5 stop responding for no obvious reason.
Details
I'm developing ASP.NET 2.0 web applications using Visual Studio 2010 on a Windows 7 Ultimate 32-bit machine (which is a VMware instance running in VMware Workstation).
For no obvious reason, IIS just appears to stop working for the current session. If I restart the browser, it works... for a short time, and then stops again. If I open a different browser (while the first one is hanging) the new one works... for a short time.
Restarting IIS works (for a short time), as does rebuilding the application, but there is absolutely no pattern to when it stops working... and it's driving me insane!!
There is no high CPU usage during this time, nor any high memory usage.
Nor does it appear to be browser-specific: I generally use Firefox for development, but this also happens in Chrome and IE. Nor is it limited to this machine; it also happens when I test the website in old browsers running in other virtual instances.
I'm not sure when this started happening, so I am unable to say what (if anything) had changed at the time.
Can anybody suggest any reason why this might be happening?
UPDATE
This is now driving me insane, so I've been doing more investigation.
Here is a screenshot from FireBug showing that the actual .aspx request completes correctly, but for some reason IIS simply does not respond to the requests for the other files within the page. The files are definitely there and have been served by IIS many, many times.
I have turned on the logs for IIS, and the only requests it has logged are those that show as successful in FireBug... those in red are missing.
#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip sc-status sc-substatus sc-win32-status time-taken
2013-02-06 11:00:40 127.0.0.1 GET /default.aspx - 80 superuser 127.0.0.1 200 0 0 15
2013-02-06 11:00:40 127.0.0.1 GET /Org/Layout/Css/v0/FrontGeneral.css - 80 - 127.0.0.1 200 0 0 15
2013-02-06 11:00:40 127.0.0.1 GET /WebResource.axd d=IJ9YYVsWm9qkk8kUYcn2sYcQLbYErTn4We9MkwgF6JGUiPeoRWMmAKKsi_AbjNJQ-Je-l4D-1zuU66SBZi_kDHe1u7c1&t=634604425351482412 80 superuser 127.0.0.1 200 0 0 0
2013-02-06 11:00:40 127.0.0.1 GET /Scripts/v0/DefaultButtonFix.js - 80 - 127.0.0.1 304 0 0 0
I have also turned on "Trace Failed Requests" (using information from here), but that is not producing anything... the directory is empty.
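To cross-check the log against FireBug, here is a minimal sketch in Python that parses the W3C extended log above and prints each logged URI with its status, so the absence of the requests shown in red is easy to confirm. The log path and file name are assumptions (the default IIS 7.5 location):

FIELDS = ("date time s-ip cs-method cs-uri-stem cs-uri-query s-port "
          "cs-username c-ip sc-status sc-substatus sc-win32-status "
          "time-taken").split()

with open(r"C:\inetpub\logs\LogFiles\W3SVC1\u_ex130206.log") as f:
    for line in f:
        if line.startswith("#"):        # skip the directive/header lines
            continue
        row = dict(zip(FIELDS, line.split()))
        print(row["cs-uri-stem"], row["sc-status"])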

Still testing this, but I have finally found success. Disabling my AVG virus scanner seems to clear this issue right up. If you have a virus scanner/security package, don't bother adding exceptions; just blanket-disable it temporarily and give it a go. You can add the exceptions back in if this test proves successful.
Know how you feel. This has been driving me nuts for weeks. I have been tweaking FF and Chrome settings with no effect whatsoever.
Best of luck...

Related

random ssl certification failure

I just set up a custom domain for an AWS API Gateway and set up CNAME entries in Google Domains to redirect to my API Gateway. After maybe 30 minutes of waiting, I was able to use Chrome to make a simple GET request to my custom domain that properly forwarded to my API Gateway. I tested in Firefox and it worked fine too.
About 3-4 hours later I came back and tried making the same call using Python requests and it worked the first 3 times then failed.
SSLError: HTTPSConnectionPool(host='ids.references.app', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError("hostname '<my_custom_domain>' doesn't match '*.execute-api.us-east-2.amazonaws.com'")))
At first I thought this was a requests problem, but then I opened up Firefox and it didn't work either. I tried Edge and the call worked. Then I went back to Python and it worked for a bit, then stopped working. I went back to Firefox and it no longer worked. Then I tried Edge and it no longer worked. Sprinkled in there I've tried Chrome, and it has worked every time since it started working. (This order of events is from memory and may be slightly off.)
Is this a known issue with updating DNS entries, where you get some randomness until the changes have fully propagated? How would I even go about tracking where the error is occurring? That's the most frustrating thing about this: it all seems like magic, and there's no obvious point where you see something like "server 1.2.3.4 says cert_1 doesn't go with cert_2" and then later "server 4.5.6.7 says cert_2 is all good" (so it works). Would I need to install curl for Windows (is it possible to make a cURL request and see the route taken, similar to traceroute)? Would that even matter, though? What if curl, like Chrome, always worked? Does requests have this functionality (bonus points if someone can show a requests solution)? What about Firefox or Chrome? Or could I use something like Wireshark (yikes) that could somehow observe the whole system?
I'm using requests 2.25.1 and Python 3.8.5 on Windows 10 and I believe the latest versions of Edge and Firefox.
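One way to track where the failure occurs, sketched below with only the Python standard library (the host name is the placeholder from the error above): resolve every address the domain currently maps to, then open a TLS connection to each one with SNI set, just as requests does internally, and print either the certificate subject or the verification error. During DNS propagation, different addresses answering with different certificates would explain the on/off behaviour.

import socket
import ssl

HOST, PORT = "ids.references.app", 443   # the custom domain from the error above

# Every address the name currently resolves to; while DNS changes
# propagate, different lookups can land on different endpoints.
addrs = sorted({info[4][0] for info in
                socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP)})

for addr in addrs:
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((addr, PORT), timeout=5) as sock:
            # server_hostname sends SNI and drives hostname verification,
            # mirroring what requests does under the hood.
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                subject = dict(item[0] for item in tls.getpeercert()["subject"])
                print(addr, "OK:", subject.get("commonName"))
    except ssl.SSLCertVerificationError as exc:
        print(addr, "FAILED:", exc.verify_message)
    except OSError as exc:
        print(addr, "connection error:", exc)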

What does "Blocked" really mean in the Firefox developer tools Network monitoring?

In the timing section of the Firefox Network Monitor documentation, "Blocked" is explained as:
Time spent in a queue waiting for a network connection.
The browser imposes a limit on the number of simultaneous connections that can be made to a single server. In Firefox this defaults to 6.
Is the limit on the number of connections the only limitation? Or does time the browser spends waiting to get a connection from the OS count as "Blocked" too?
In a fresh browser, on a first connection, before any other connection is made (so the limit should not apply here), I get blocked for 195 ms.
Is this the browser waiting for the OS? What does "Blocked" mean here?
We changed the Firefox setting (about:config) network.http.max-persistent-connections-per-server to 64 and the blocks went away. We changed it back to 6, and changed our design/development approach to a more 'asynchronous' loading method so as not to have a large number of simultaneous connections. The blocks were mostly from loading a lot of PNG flags for locale settings.
I have a server that takes several seconds to respond, which allowed me to cross-reference the Firefox measurement with a Wireshark trace. I see that the first SYN is sent out immediately. The end of the "Blocked" time corresponds to when the Server Hello comes back.
I couldn't relate the end of "TLS setup" to any Wireshark packet. It extends a few seconds beyond the last data that is exchanged on the initial TLS connection.
Bottom line: it doesn't look like the time spent in "Blocked" and "TLS setup" is very reliable, at least in some cases.
My setup has a TLS reverse proxy that forwards the connection with SNI. I'm not sure if that might be related.
Time spent in a queue waiting for a network connection.
The browser imposes a limit on the number of simultaneous connections
that can be made to a single server. In Firefox this defaults to 6,
but can be changed using the
network.http.max-persistent-connections-per-server preference. If all
connections are in use, the browser can't download more resources
until a connection is released.
Source: https://developer.mozilla.org/en-US/docs/Tools/Network_Monitor
It's very clear that the browser fixes the limit at 6 concurrent connections per server (domain/IP); the OS question is not very relevant.
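If you want to see the effect outside the browser, here is an illustrative sketch in Python with requests (the URL is arbitrary, and this is only an analogy to the browser's behaviour, not Firefox's implementation): cap a session's pool at 6 connections per host and fire 12 requests at once, and the surplus requests queue for a free connection, which is exactly what the Network Monitor reports as "Blocked".

from concurrent.futures import ThreadPoolExecutor

import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# pool_maxsize caps simultaneous connections per host, much like
# network.http.max-persistent-connections-per-server = 6 in Firefox;
# pool_block=True makes surplus requests queue instead of opening
# extra connections.
session.mount("https://", HTTPAdapter(pool_maxsize=6, pool_block=True))

with ThreadPoolExecutor(max_workers=12) as pool:
    # 12 concurrent requests against a 6-connection pool: six run,
    # six sit in the queue ("Blocked") until a connection frees up.
    results = pool.map(session.get, ["https://example.com/"] * 12)
    print([r.status_code for r in results])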
In my case, both the waiting-for-network-connection and DNS lookup times were pretty high, up to 2 seconds each, which caused significant page load times when a page was loaded for the first time. Firefox was freshly installed without add-ons and had just been started with no other open tabs. I tried on both Ubuntu 18.04 LTS and Ubuntu 19.04 with the same results. Although my ISP doesn't provide IPv6 support, my router assigns IPv6 addresses. As it turned out, the problem was a broken IPv6 network, which forced Firefox to fall back to IPv4 (of course, only after a timeout). After I turned off IPv6 support in Linux, the requests sped up significantly.
Here is a relevant discussion: https://bugzilla.mozilla.org/show_bug.cgi?id=1452028
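If you suspect the same IPv6 fallback, a quick check is to time a TCP connect per address family. A minimal Python sketch (example.com stands in for any site that loads slowly for you): on a broken IPv6 network, the AF_INET6 attempt burns the whole timeout while AF_INET connects immediately.

import socket
import time

HOST, PORT = "example.com", 443   # any site that loads slowly for you

for family in (socket.AF_INET6, socket.AF_INET):
    try:
        infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
    except socket.gaierror:
        print(family.name, "-> no address")
        continue
    addr = infos[0][4][:2]          # (ip, port) for this family
    start = time.monotonic()
    try:
        with socket.create_connection(addr, timeout=5):
            print(family.name, f"-> connected in {time.monotonic() - start:.2f}s")
    except OSError as exc:
        print(family.name, f"-> failed after {time.monotonic() - start:.2f}s ({exc})")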
I encountered this error while using an Angular 9 'dist' deployment. I discovered that the error appeared because I was trying to access an unreachable API at the specified IP address and port.
Therefore, to solve it, I just had to reference a valid and accessible API.

Gradle extremely slow HTTP resource access on Windows. How to diagnose and fix?

Gradle 2.2 takes hours on a Windows PC to build a project that takes 8 minutes on Linux. When run with --debug on the slow machine, Gradle reports no errors, but it stops and waits approx. 2 minutes at every resource, after every User-Agent line:
18:39:15.819 [DEBUG] [org.apache.http.headers] >> User-Agent: Gradle/2.0 (Windows 7;6.1;amd64) (Oracle Corporation;1.7.0_67;24.65-b04)
<2 min. delay>
18:41:15.527 [DEBUG] [org.apache.http.impl.conn.DefaultClientConnection] Receiving response: HTTP/1.1 200 OK
18:41:15.527 [DEBUG] [org.apache.http.headers] << HTTP/1.1 200 OK
Linux workstations on the same subnet (behind the same firewall and using the same squid proxy) do not have this delay.
An extended snip from the Windows build is here.
Here is a snip from the Linux build around the same point.
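One way to isolate the stall before digging into Gradle itself is to time a single repository request from the slow machine with and without the proxy. A sketch in Python (the URL is just an example of something the build fetches, and it assumes the proxy comes from the environment): if only the proxied request shows the ~2-minute delay, the proxy/filter layer is the culprit.

import time

import requests

URL = "https://repo.maven.apache.org/maven2/"   # any artifact URL the build fetches

def timed_get(session, label):
    start = time.monotonic()
    resp = session.get(URL, timeout=300)
    print(f"{label}: HTTP {resp.status_code} in {time.monotonic() - start:.1f}s")

proxied = requests.Session()        # honours http_proxy/https_proxy settings
direct = requests.Session()
direct.trust_env = False            # ignore environment proxy settings entirely

timed_get(proxied, "via proxy")
timed_get(direct, "direct")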
This seems to have been a VERY STRANGE issue with a transparent http proxy and DansGuardian web filter. For still unknown reasons, this one PC’s http traffic got mangled.
This is odd, because our entire LAN's http traffic to the internet is content-filtered. There was a filtering exception that allowed any traffic from this slow PC to pass unfiltered, but that had the opposite of the expected effect: Gradle traffic became crazy slow on the 'unfiltered' PC, while content-filtered workstations had no problems. Even stranger, Gradle also ran at normal speed on unfiltered Linux workstations.
The workaround was to configure IPTables and the transparent proxy to completely ignore the slow PC's http traffic. So now it is unfiltered and unproxied. It has been nicknamed the pornstation.
It happened to us as well, though in our case it was caused by the AntiVirus on the PC (Nod32 not to name it).
We had to completely disable the HTTP/web filters on it.
May not be your case, but may help others coming here for advice.

Rails logging 127.0.0.1 every 5 minutes

I have noticed in my production Rails log that exactly every 5 minutes there is a GET request to my root URL from 127.0.0.1, which is my localhost.
Started GET "/" for 127.0.0.1 at 2012-07-01 14:05:03 -0500
Processing by ApplicationController#landing as */*
Rendered shared/_header.html.erb (0.9ms)
Rendered shared/_footer.html.erb (0.5ms)
Rendered application/landing.html.erb (5.7ms)
Completed 200 OK in 8ms (Views: 7.9ms)
I have never seen this in any other Rails apps. I am using New Relic, MongoDB, Nginx, and Unicorn. Can anyone tell me why this is happening or what it means?
This is most likely a monitoring application, especially since it's only checking the root path for a successful connection (i.e. HTTP 200). Have you installed any tools such as monit? What hosting provider are you using? They may monitor without you knowing.
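If you want to see how little it takes, here is a hypothetical sketch in Python of the kind of poller that is probably hitting your app; the URL and interval are assumptions matching your log.

import time
import urllib.request

URL = "http://127.0.0.1/"   # the root path from the log
INTERVAL = 300              # seconds; the 5-minute cadence you observed

while True:
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            # A monitor typically only cares that the root answers 200 OK.
            print("up" if resp.status == 200 else f"degraded: {resp.status}")
    except OSError as exc:
        print("down:", exc)
    time.sleep(INTERVAL)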

Subversion unbearably slow on Windows 7

My company is currently using TortoiseSVN 1.6.16 32-bit on Windows XP to connect via HTTPS to a VisualSVN-Server 2.1.19 running on a Windows Server 2003 residing in the same network (no proxy). We use a self-signed certificate and Kerberos authentication using windows credentials (I suppose this is a VisualSVN-specific feature). In this setup, everything works dandy.
When my company decided to move on to Windows 7, we tried TortoiseSVN 1.7.6 64-bit on Windows 7 64-bit which resulted in the following problem:
Any operation involving the server (repo browser, checkout, update, check-in, ...) is unbearably slow, e.g.:
opening the repo-browser (10 projects): 15 min
update on a fresh checkout of 50 files: 1 min
checkin of a single empty file: 30 sec
TortoiseSVN alternately shows normal transmission speeds and 0 bytes/s. Many small files seem to be slower than a few big ones.
The slow connection results in various failures when using neon as the HTTP library (serf is still slow, but operations finish successfully without errors)
EasySVN, SmartSVN and the SVN command line client that comes with TortoiseSVN show the same behaviour. Same with TortoiseSVN 1.6.16 64-bit.
Changing the server protocol to HTTP (no SSL) does not improve the situation
On the other hand
TortoiseSVN 1.7.6 32-bit on Windows XP works fine with our server
Access via browser/WebDAV works well even under Windows 7
Server side logs do not show errors or even warnings
I found several posts which also complained about slow behaviour on Windows 7, but they didn't fit my bill because they were local operations or were restricted to TortoiseSVN.
As there is no indication that there is a general problem with Subversion on Windows 7, I suspect that it could be our OS' networking parameters or protocol versions. Are there any parameters which are known to influence Subversion's performance?
I have to admit I am not familiar with how exactly Subversion (or rather neon/serf) relies on the OS and on which parts. Any information on that would be greatly appreciated.
Are there any parameters in the Subversion 'servers' file that I should test? How would you rate my chances that Wireshark'ing the connection will help me?
Similar experiences, opinions, hints, help and straws are welcome.
Wireshark shows sporadic gaps of ca. 5 sec in the TCP stream, apparently caused by VisualSVN Server.
https: the server acknowledges the client hello, then waits 5 secs before sending its server hello
https: the server acknowledges the client key, then takes 5 secs before supplying its encrypted handshake data
https: even outside the handshake, the server sometimes sends an ACK (at the TCP level) and then waits 5 secs before sending anything back to the client (the data is encrypted, so it's hard to tell whether the pause occurs at some point of interest)
http: at both server side transmissions during the NTLM authentication
http: before server sending a FIN flag
A typical fail with Windows 7 against an older server is IPv6 networking.
If your machine does not have an SVN server listening on an IPv6 address, Windows 7 might still try a TCP6 connect first (you can see this in Process Explorer if you look at the open sockets of the TortoiseSVN process while trying an operation); this has a timeout of a few seconds, after which it retries over IPv4.
Simple solutions are either upgrade your server to an IPv6 capable one or disable IPv6 for the Windows 7 clients.
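To see whether your clients would even try IPv6 first, check the order in which name resolution returns addresses. A small Python sketch (the server name is hypothetical): if an AF_INET6 entry is listed first but nothing listens on it, each new connection pays the connect timeout before the IPv4 retry, matching the ~5 s gaps in the Wireshark trace.

import socket

SERVER, PORT = "svn.example.local", 443   # hypothetical VisualSVN host

# Addresses are tried in the order the resolver returns them; an IPv6
# entry first, with no listener behind it, means a timeout per
# connection before the IPv4 fallback.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        SERVER, PORT, type=socket.SOCK_STREAM):
    print(family.name, sockaddr[0])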
Another thing you could verify (the answer above didn't work for us) is the Internet Explorer settings, especially if you have IE9. We found that by disabling the option "Automatically detect settings" in the Internet Options -> Connections tab -> LAN settings, SVN started working normally again.
The issue was never properly cleared up. Most probably, the company internal network path between the client and the server was somehow at fault. The matter became obsolete when we moved the SVN server to another machine. The very same setup of server and clients works fine now, even with Windows 7.
I had the same symptom of a very slow repository browse, slow updates, slow everything.
My SVN server has two Ethernet cards, so it has two Ethernet IP addresses. The SVN server was only listening on one of the IP addresses. So a name resolution via WINS or NetBIOS could resolve to the 'wrong' IP address.
TortoiseSVN would retry, eventually the name resolution would find the 'correct' IP address, and things would work.
