We have a TFS 2013 server which I use only for source control. I just got a new desktop PC with Windows 10 and Visual Studio 2017. I can connect to TFS successfully and start to pull down code, but after the directory structure has been built locally and files start to come down, several succeed and then the rest fail with a 503 Service Unavailable, and the connection is lost, forcing me to reconnect.
I can connect again a few seconds later, and keep trying, and it happens again.
If I pull down files one at a time, it seems to be ok, but when I try to pull an entire project, it blows up.
This happens on multiple projects on the server.
In general, I see 503 errors in ASP.NET when I overload an application to the point where it crashes the application pool. I don't know if that's what's happening here, but if it is, I wonder whether Visual Studio is pulling down the code too fast, perhaps with more concurrent threads than the older version of TFS can handle, and crashing it.
If that's the case, is there anything I can do to change that on my machine? Or do you have any other ideas on what could cause this?
As of now, I don't have access to the server to grab the event logs, but I can put in a request if it's not something I can figure out and fix myself.
Last time I encountered this, I was behind a rate-limiting proxy. 503 is a common error code for temporary service interruptions such as rate limiting.
The nginx limit_req module documentation, for instance, describes exactly this behavior: "Excessive requests are delayed until their number exceeds the maximum burst size, in which case the request is terminated with an error 503 (Service Temporarily Unavailable)."
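A typical nginx setup that produces exactly that pattern would look something like this (zone name and rates are made up for illustration):

http {
    # allow 10 requests/second per client IP; delay up to 20 excess
    # requests; anything beyond that is rejected with a 503
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            limit_req zone=perip burst=20;
        }
    }
}

Visual Studio hammering many small files in parallel is exactly the kind of traffic that blows through such a burst allowance.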
The IIS site that hosts TFS can also be configured with connection and bandwidth limits, via two optional uint attributes:

maxBandwidth: specifies the maximum network bandwidth, in bytes per second, that is used for a site. Use this setting to help prevent overloading the network with IIS activity.

maxConnections: specifies the maximum number of connections for a site. Use this setting to limit the number of simultaneous client connections.
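For reference, both attributes live on the site's <limits> element in applicationHost.config; a trimmed example (the site name and numbers are made up):

<sites>
    <site name="Team Foundation Server" id="1">
        <!-- cap the site at ~1 MB/s and 50 simultaneous connections -->
        <limits maxBandwidth="1048576" maxConnections="50" />
    </site>
</sites>

If the server admin has set maxConnections low, a parallel download can exhaust it on its own.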
Visual Studio 2017 has optimized the way it downloads, using multiple parallel threads (at least 8, if I remember correctly). This can trigger rate limiting in some proxies, and lots of small code files can easily trip a rate limit.
If it's possible to reach the TFS server directly, you could try adding it to the proxy exclusion list to rule out the proxy on your end as the cause of the interruption. But an HTTP server such as nginx could also be deployed as a reverse proxy on the server end as a security measure, in which case the server admin would have to change the limits.
I had a similar issue when accessing our TFS Git repo from IntelliJ. All of a sudden, it errored with "unable to access <repo url>: The requested URL returned error: 503".
What helped:
In IntelliJ, go to VCS -> Git -> Remotes, change the URL to a false one (so validation fails), and then change it back to the valid one.
In my case a login request then appeared, and I finally regained access to the repo.
We have intermittent issues with cross-domain MSMQ communication.
Applications residing on domain A (the client) and domain B (the server) are communicating. The client can always send messages to the server. The queues (DIRECT=OS format) are created and in a "Connected" state on the server. Sometimes, however, when the server responds, it generates a "BadDestinationQueue" error.
This can go on for anywhere between 10 minutes and an hour, after which the error is gone and communication is normal. We see nothing abnormal, and no obvious temporal pattern.
Clients in B talking to the server in B have no such issues.
Are there any settings that could cause this? The intermittency suggests some sort of cache somewhere. The DNS entries are up-to-date, and the machines are discoverable by hostname even while MSMQ insists the destination queue does not exist.
We have a third-party developer housing our source code on their TFS server. I am in the process of downloading the entire 1.5 GB, and progress is frequently halted with this error:
HTTP code 500: ( The number of HTTP requests per minute exceeded the configured limit. Contact the server administrator. )
Visual Studio maintains the connection and continues to retry. The next few requests fail with this error:
TF400324 Team Foundation services are not available from server xxx.xxxxxx.net.
The underlying connection was closed. An unexpected error occurred on a send.
After a while, the throttle is lifted and the download resumes at full speed until I hit the threshold again.
The third party is unwilling to alter their configuration. Is there something I can do in Visual Studio (2017) or on my PC to slow down the requests so I don't hit their limit?
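One thing you could try, on the assumption that Visual Studio's TFS client goes through the standard System.Net stack and honors its configuration: cap the number of concurrent connections per server in devenv.exe.config (under Common7\IDE in the VS installation). Fewer parallel connections should translate into fewer requests per minute. The value below is deliberately low and purely an example:

<configuration>
  <system.net>
    <connectionManagement>
      <!-- example: allow at most 2 concurrent connections to any host -->
      <add address="*" maxconnection="2" />
    </connectionManagement>
  </system.net>
</configuration>

If that has no effect, a get from the tf.exe command-line client instead of the IDE may also pace its requests differently, though I haven't verified that.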
I run a company web server that seems to get hit constantly with wpad.dat requests, which fill up my error logs with 404 Not Found errors. I had considered ignoring the wpad.dat requests in the config, but upon further inspection it seems that some systems try to get this file every couple of seconds at times; some keep trying several times a minute, ongoing for days.
Can I create a wpad.dat file to serve to these systems that tells them there are no proxy settings, so they stop hammering the server with requests? I know the idea of the wpad.dat file is to provide auto-detected proxy settings, and we only seem to have this issue with users logged into our VPN: their web browsers just sit there and hammer the server with requests. I'd like to give them what they want. Any suggestions?
Serving this file seems to tell the connected system that there is no proxy info and to connect directly.
// Minimal PAC file: tell every client to connect directly, with no proxy
function FindProxyForURL(url, host) {
    // "DIRECT" means: do not use a proxy for this URL
    return "DIRECT";
}
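If you go this route, it may also help to serve the file with the standard PAC MIME type; e.g. a hypothetical nginx snippet (the path is a placeholder):

# serve wpad.dat with the PAC content type so clients accept it
location = /wpad.dat {
    root /var/www/wpad;
    default_type application/x-ns-proxy-autoconfig;
}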
Currently we have a few test servers which connect to test.sagepay.com to process transactions. However, on 2 of the servers we can successfully register transactions with SagePay but then don't receive any SagePay notifications coming back at all, while on other servers (running on different IP addresses) everything works perfectly fine.
I've got the error code "5006 - Unable to redirect to Vendor's web site. The Vendor failed to provide a RedirectionURL". It used to work perfectly fine on those servers, and only stopped working last Thursday, although we are sure we didn't touch those servers during that period at all. Besides, we do see a few occasional notifications coming in from Sage, which we believe are REPEAT notifications, not the original ones. We can see all those transactions registered on our account, but of course all of them failed because we never got a notification back.
We have also made sure that our firewall is open to the whole range 195.170.169.*, from which we expect to receive the Sage notifications.
So my questions are:
Does SagePay have some sort of mechanism to block certain IP addresses and stop sending back notifications?
Is the SagePay server that sends out the original notifications different from the one that sends out REPEAT notifications?
I've faced the very same issue. Our script was handing an https:// address over to SagePay as the NotificationURL, but HTTPS was not set up, hence the notification script could not be reached. Once I changed it to http and ensured that the notification script's response was correct, it worked.
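For reference, the Sage Pay Server protocol expects the notification script to answer with a plain-text body of name=value lines, something like this (the URL is a placeholder for your own completion page):

Status=OK
StatusDetail=Notification received
RedirectURL=https://www.example.com/payment-complete

Status can be OK, INVALID or ERROR, and RedirectURL is where the shopper's browser gets sent next; if SagePay never receives a valid response like this, it surfaces as the 5006 error mentioned above.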
Also, it seems that when SagePay could not reach the RedirectURL, it tried 8 more times.
I'm not exactly answering your questions, but perhaps it will help. I'd add this as a comment, but I can't...
I am using the Windows WinHTTP library to perform HTTP and HTTPS queries, and am sometimes getting a WinHTTP 12175 error, usually after several days of operation (sometimes weeks) and hundreds of thousands to millions of queries.
When that happens, the only way to "fix" the error is to restart the service, and occasionally the errors will not go away until Windows (the server) is restarted.
Interestingly enough, this error appears "gradually": first for some HTTPS queries, then after some time the service gets them for 100% of HTTPS queries, and later on they pop up even for regular HTTP queries (the service makes lots of HTTP and HTTPS queries, on the order of dozens per second, to many servers).
The HTTP connections are made directly, without any proxies. The OS is Windows 2012 R2.
AFAICT there is no memory leak or corruption; outside of the 12175 errors, the rest of the service operates just fine: no access violations or suspicious exceptions, the service responds to queries normally, etc.
I have a suspicion this could be related to Windows Update doing OS certificate updates, renewing credentials or something, but I could never positively confirm it.
Has anyone else observed this behavior? Any workarounds?
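For context, 12175 is ERROR_WINHTTP_SECURE_FAILURE, i.e. a TLS/certificate-layer failure, which would fit the Windows Update / certificate suspicion. A minimal diagnostic sketch in C (assuming a plain WinHTTP client linked against winhttp.lib; everything except the callback registration is elided):

#include <windows.h>
#include <winhttp.h>
#include <stdio.h>

// Logs the detail flags behind WINHTTP_CALLBACK_STATUS_SECURE_FAILURE,
// the notification that precedes error 12175 (ERROR_WINHTTP_SECURE_FAILURE).
static void CALLBACK OnSecureFailure(HINTERNET hInternet, DWORD_PTR context,
    DWORD status, LPVOID info, DWORD infoLength)
{
    if (status == WINHTTP_CALLBACK_STATUS_SECURE_FAILURE) {
        DWORD flags = *(DWORD *)info;
        // Bits such as WINHTTP_CALLBACK_STATUS_FLAG_CERT_DATE_INVALID or
        // WINHTTP_CALLBACK_STATUS_FLAG_CERT_REV_FAILED point at certificate
        // or revocation-check problems rather than, say, handle exhaustion.
        fprintf(stderr, "WinHTTP secure failure, flags: 0x%08lx\n",
                (unsigned long)flags);
    }
}

int main(void)
{
    HINTERNET hSession = WinHttpOpen(L"diag/1.0", WINHTTP_ACCESS_TYPE_NO_PROXY,
        WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
    if (!hSession) return 1;

    // Handles created from this session inherit the callback, so every
    // request reports secure failures as they happen.
    WinHttpSetStatusCallback(hSession, OnSecureFailure,
        WINHTTP_CALLBACK_FLAG_SECURE_FAILURE, 0);

    // ... WinHttpConnect / WinHttpOpenRequest / WinHttpSendRequest as usual;
    // when a query fails with 12175, the callback will have logged why.

    WinHttpCloseHandle(hSession);
    return 0;
}

If the logged flags show CERT_REV_FAILED, that would line up with something periodically breaking certificate revocation checks on the box.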