I'm stumped by a very strange behavior on my machine.
While porting a Remoting application to WCF, I wanted to implement a "one proxy per server call" scheme (as proposed here).
I went for a standard net.tcp binding and noticed that the first (and now only) call for each proxy was incredibly slow: opening the client channel took about 2 seconds!
At first I thought it was because of the default transport security for net.tcp bindings, but switching to SecurityMode.None didn't bring any improvement.
After lots of tests I found out that basicHttp binding was about 1000 times faster: opening the channel took about 2 ms!
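For reference, I measure the timings around the channel open call, roughly like the following sketch (IMyService and the endpoint address are placeholders, not my actual contract):

    // Rough sketch of the timing measurement (using System; using System.Diagnostics; using System.ServiceModel).
    // IMyService and the address are placeholders.
    var sw = Stopwatch.StartNew();
    var binding = new NetTcpBinding(SecurityMode.None);
    var factory = new ChannelFactory<IMyService>(
        binding, new EndpointAddress("net.tcp://127.0.0.1:9000/MyService"));
    IMyService proxy = factory.CreateChannel();
    ((ICommunicationObject)proxy).Open();   // this is the call that takes ~2 s with "localhost"
    sw.Stop();
    Console.WriteLine("Open took " + sw.ElapsedMilliseconds + " ms");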
Then I tried playing around with the service URI. For my tests, both server (self-hosted) and client were on the same machine (in fact, even in the same process).
In the beginning I used "net.tcp://localhost:Port/..." and got 2 seconds for opening the channel.
Just out of curiosity, because I'm working over an RDP connection via VPN, I tried my machine's explicit hostname "net.tcp://myworkmachinehost:Port/..." and now it took 4 seconds to open the channel!
Finally I tried avoiding name resolution and used "net.tcp://127.0.0.1:Port/..." and suddenly everything was blazing fast: Opening the channel took a mere 2 ms!
A colleague of mine got the same 2 second delay with "localhost" on his machine (not working from remote). Using his hostname also gave 2 seconds and using the IP address was fast as well.
With basicHttpBinding there's no performance difference in how we specify the service URL.
Calling "nslookup myworkmachinehost" immediately returns my IPv4 address, so name resolution itself also seems to be fast.
Apart from modifying the server address on the client side, I also tried all combinations of server endpoint address, with very strange results (a sketch of the self-hosting code follows the timings):
Server endpoint address net.tcp://0.0.0.0:Port...
Client URL 127.0.0.1:Port: 6 ms
Client URL localhost:Port: 2005 ms
Client URL myworkmachinehost:Port: 4007 ms
Server endpoint address net.tcp://127.0.0.1:Port...
Client URL 127.0.0.1:Port: 6 ms
Client URL localhost:Port: 20135 ms
Client URL myworkmachinehost:Port: TIMEOUT after 10 s
Server endpoint address net.tcp://localhost:Port...
Client URL 127.0.0.1:Port: 5.5 ms
Client URL localhost:Port: 1.5 ms
Client URL myworkmachinehost:Port: 1 ms
Server endpoint address net.tcp://myworkmachinehost:Port...
Client URL 127.0.0.1:Port: 8 ms
Client URL localhost:Port: 2 ms
Client URL myworkmachinehost:Port: 1.5 ms
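The self-hosting for these tests looks roughly like this (a sketch; MyService, IMyService and the port are placeholders; only the host part of the base address was varied as listed above):

    // Self-hosting sketch (using System; using System.ServiceModel).
    // The host part of the base address is what was varied above
    // (0.0.0.0, 127.0.0.1, localhost, myworkmachinehost).
    var host = new ServiceHost(typeof(MyService), new Uri("net.tcp://127.0.0.1:9000"));
    host.AddServiceEndpoint(typeof(IMyService), new NetTcpBinding(SecurityMode.None), "MyService");
    host.Open();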
How can this be and what can I do to further analyze this situation?
Thanks in advance...
I have a (Spring Boot) server, and a client trying to establish a Websockets connection to the server.
I've run and tested the server on a Linux machine (Ubuntu 20.04), and it works fine.
It also ran fine on my Windows (Windows 10 Home) machine up until a few days ago. Now it is acting strange.
I checked the network traffic between client and server in Wireshark, both on Linux and on Windows.
Here is the Linux capture:
And this is the Windows capture:
The blacked-out IPs are the client's. Both the Linux and Windows servers are running in the same network, so the problem should not be in a router configuration.
In both cases, the client makes the same request to /location/websocket, but on Linux the server responds successfully in less than 1 second, while on Windows it responds about 13 seconds later and immediately follows the response by closing the WebSocket connection.
What looks strange to me are the NBSTAT name queries. I tried several times and there are always three queries between the arrival of the client request, and the closing of the websocket connection.
So maybe the Windows machine needs to do a name query to respond successfully? Is this normal? What does the <00>...<00> string in the name query mean? I checked the network traffic while keeping the server up but without the client connecting, and I didn't see any activity on port 137, so it definitely only happens when the client tries to contact the Windows machine. What can I do about this? How can I get the server responding to WebSocket connections again?
I have a Windows application (APP) and Audio Processing Object (APO) loaded by AudioDG.exe that communicate via gRPC:
The APP part, written in C#, creates the server via Grpc.Core.
The APO part creates the client via grpc++.
The server is on 127.0.0.1:20000 (I can see it is up and listening with netstat -ano).
I can confirm that the APO is loaded into the audio device graph by inspecting it with Process Explorer.
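For reference, the APP side creates the server roughly like this (a sketch; AudioControl / AudioControlImpl are placeholder names for the generated service and its implementation, not my actual code):

    // Sketch of the APP-side server setup with Grpc.Core.
    // AudioControl / AudioControlImpl stand in for the actual generated service classes.
    using Grpc.Core;

    var server = new Server
    {
        Services = { AudioControl.BindService(new AudioControlImpl()) },
        Ports = { new ServerPort("127.0.0.1", 20000, ServerCredentials.Insecure) }
    };
    server.Start();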
Everything worked like a charm on Windows 8 and 10, but on Windows 11 it cannot communicate at all: I get either error code 14 (Unavailable, "failed to connect to all addresses") or 4 (Deadline Exceeded).
After enabling debug traces, I now see a "socket is null" description for the "connect failed" error:
I0207 16:20:59.916447 0 ..\..\..\src\core\ext\filters\client_channel\subchannel.cc:950: subchannel 000001D8B9B01E20 {address=ipv4:127.0.0.1:10000, args=grpc.client_channel_factory=0x1d8bb660460, grpc.default_authority=127.0.0.1:10000, grpc.internal.subchannel_pool=0x1d8b8c291b0, grpc.primary_user_agent=grpc-csharp/2.43.0 (.NET Framework 4.8.4470.0; CLR 4.0.30319.42000; net45; x64), grpc.resource_quota=0x1d8b8c28d90, grpc.server_uri=dns:///127.0.0.1:10000}: connect failed: {"created":"#1644240059.916000000","description":"socket is null","file":"..\..\..\src\core\lib\iomgr\tcp_client_windows.cc","file_line":112}
What I've tried so far:
Updating both parts to the latest grpc versions.
Using "no proxy", "Http2UnencryptedSupport" and other env variables.
Using "localhost" or "0.0.0.0" instead of "127.0.0.1".
Updating connection to use self signed SSL certificates (root CA, server cert + key, client cert + key).
Adding inbound / outbound rules for my port, and then disabling firewall completely.
Creating server on APO side and trying to connect with the client in APP.
Everything works (both with insecure and SSL creds) if I create both the client and the server in the C# part, but as soon as it's APP-APO communication it feels blocked or sandboxed.
What has been changed in Windows 11 that can "block" gRPC?
Thanks in advance!
In your input you write:
Server is at 127.0.0.1:20000
Looking further at the logs, you can see that:
The server is located at
grpc.server_uri=dns:///127.0.0.1:10000
Based on the question posed and the amount of data provided, I would check which port the server is really using and which port the client is looking for a connection on.
The easiest way to do this is to use the built-in Resource Monitor application. On the Network tab, in the TCP Connections list, you can find the application and the port it uses.
You can also use the PowerShell commands
Test-NetConnection -ComputerName 127.0.0.1 -Port 10000 -InformationLevel "Detailed"
Test-NetConnection -ComputerName 127.0.0.1 -Port 20000 -InformationLevel "Detailed"
At least this is the first thing I would check based on what you described.
Regarding your question about the changes in Windows 11, I do not think that is what is causing problems for you. However, Windows 11 has additional security features compared to Windows 10; try disabling the security features completely as a test. Perhaps this will help solve the problem.
As for ASP.NET Core 6.0 itself (if I understood the version correctly), there is a possibility that the server part, running outside the sandbox of the development environment, still does not accept the client certificate. At the code level, you can try to fix this by adding the following switch:
// This switch must be set before creating the GrpcChannel/HttpClient.
AppContext.SetSwitch(
    "System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);

// The port number (5000) must match the port of the gRPC server.
var channel = GrpcChannel.ForAddress("http://localhost:5000");
var client = new Greet.GreeterClient(channel);
Microsoft describes more troubleshooting steps for gRPC on ASP.NET Core 6.0 in detail here:
https://learn.microsoft.com/en-us/aspnet/core/grpc/troubleshoot?view=aspnetcore-6.0
I hope this was useful and that at least one of the solutions I suggested will help solve your problem. In any case, if I had more information, I think I could help you more accurately.
I am debugging a very odd problem I noticed when testing in my own network (it doesn't appear to happen on servers outside my network), where random HTTP requests fail.
I have tried it with two different Apache servers on my network, and it happens on about 1 out of every 100 requests with both.
I thought it was front-end related, but it appears to be something to do with my internal network or configuration. I installed Charles Proxy on my machine and used my phone to make the requests in the application (Ajax).
The Ajax/HTTP request is being made, but it never makes it into my access logs, and I get the error "Remote Server closed the connection before sending the response header".
How can I debug this further?
NOTE: Also worth noting, I can only reproduce the problem on mobile iPhone and iPad devices when connecting to the server on that machine (EVEN when using the HTTP proxy... which is very odd).
EDIT:
I did a Wireshark capture for port 80 on the server computer while accessing it from the iPhone. I am having a hard time interpreting it.
Here is the link to the capture files:
CAPTURE 1 (iOS 8 iPhone):
https://www.dropbox.com/s/1ipruv3wlmgng5o/http%20capture%20bad.pcapng?dl=0
NOTE: The error happens after the LAST post to sales/add_payment
CAPTURE 2 (iOS 8 iPhone):
https://www.dropbox.com/s/4zu3654uh9l6230/http%20capture%20bad%202.pcapng?dl=0
NOTE: The error happens after the LAST post to sales/complete
CAPTURE 3 (Android 4.4):
https://www.dropbox.com/s/8xtwkbewce02psw/http%20capture%20android%201.pcapng?dl=0
FOLLOW UP:
If it is indeed faulty network equipment, how do I determine what is bad? (device, router, modem?)
My company is currently using TortoiseSVN 1.6.16 32-bit on Windows XP to connect via HTTPS to a VisualSVN-Server 2.1.19 running on a Windows Server 2003 residing in the same network (no proxy). We use a self-signed certificate and Kerberos authentication using windows credentials (I suppose this is a VisualSVN-specific feature). In this setup, everything works dandy.
When my company decided to move on to Windows 7, we tried TortoiseSVN 1.7.6 64-bit on Windows 7 64-bit which resulted in the following problem:
Any operation involving the server (repo-browser, checkout, update, checkin, ...) is unbearably slow e.g.
opening the repo-browser (10 projects): 15 min
update on a fresh checkout of 50 files: 1 min
checkin of a single empty file: 30 sec
TortoiseSVN alternately shows normal transmission speeds and 0 bytes/s. Many small files seem to be slower than a few big ones.
The slow connection results in various failures when using neon as the HTTP library (with serf it is still slow, but operations finish successfully without errors).
EasySVN, SmartSVN and the SVN command-line client that comes with TortoiseSVN show the same behaviour. The same goes for TortoiseSVN 1.6.16 64-bit.
Changing the server protocol to HTTP (no SSL) does not improve the situation
On the other hand
TortoiseSVN 1.7.6 32-bit on Windows XP works fine with our server
Access via browser/WebDAV works well even under Windows 7
Server side logs do not show errors or even warnings
I found several posts that also complained about slow behaviour on Windows 7, but they didn't fit my case because they concerned local operations or were restricted to TortoiseSVN.
As there is no indication that there is a general problem with Subversion on Windows 7, I suspect that it could be our OS' networking parameters or protocol versions. Are there any parameters which are known to influence Subversion's performance?
I have to admit I am not familiar with how exactly Subversion (or rather neon/serf) relies on the OS and on which parts. Any information on that would be greatly appreciated.
Are there any parameters in the subversion 'servers' file which I should test? How would you consider my chances that Wireshark'ing the connection will help me?
Similar experiences, opinions, hints, help and straws are welcome.
Wireshark shows sporadic gaps of ca. 5 sec in the TCP stream apparently caused by VisualSVN Server.
https: the server acknowledges the client hello then waits for 5 secs before sending its server hello
https: the server acknowledges the client key and then takes 5 secs before supplying its encrypted handshake data
https: even outside the handshake, server sometimes sends an ACK (on TCP level) and then waits for 5 sec before sending something back to the client (the data is encrypted so it's hard to tell whether the break occurs at some point of interest)
http: at both server side transmissions during the NTLM authentication
http: before server sending a FIN flag
A typical failure with Windows 7 against an older server is IPv6 networking.
If your machine does not have an SVN server listening on an IPv6 address, Windows 7 might still try a TCP6 connect first (you can see it in Process Explorer if you look at the open sockets of the TortoiseSVN process while trying an operation); this has a timeout of a few seconds and then it retries over IPv4.
Simple solutions are either to upgrade your server to an IPv6-capable one or to disable IPv6 for the Windows 7 clients.
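If you want to check whether the server name resolves to an IPv6 address at all (and in which order the addresses come back), a quick sketch like this will show it ("svnserver" is a placeholder for your actual host name):

    // Prints every address (IPv6 and IPv4) the host name resolves to, in resolution order.
    // "svnserver" is only a placeholder.
    using System;
    using System.Net;

    class ResolveCheck
    {
        static void Main()
        {
            foreach (var addr in Dns.GetHostAddresses("svnserver"))
                Console.WriteLine(addr.AddressFamily + ": " + addr);
        }
    }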
Another thing you could verify (the answer above didn't work for us) is the Internet Explorer settings, especially if you have IE9. We found that by disabling the option "Automatically detect settings" in Internet Options -> Connections tab -> LAN settings, SVN started working normally again.
The issue was never properly cleared up. Most probably, the company internal network path between the client and the server was somehow at fault. The matter became obsolete when we moved the SVN server to another machine. The very same setup of server and clients works fine now, even with Windows 7.
I had the same symptom of a very slow repository browse, slow updates, slow everything.
My SVN server has two Ethernet cards, so it has two Ethernet IP addresses. The SVN server was only listening on one of the IP addresses. So a name resolution via WINS or NetBIOS could resolve to the 'wrong' IP address.
TortoiseSVN would retry, eventually the name resolution would find the 'correct' IP address, and things would work.
Is there a limit to the number of HTTP ports on a machine? I have a Windows application that uses .NET Remoting. Each instance of the application exposes a remote object on load through an HTTP channel with port 0 (so that the port can be decided dynamically). In a multi-user environment, will there be a limit to the number of HTTP ports?
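The registration happens roughly like this (a sketch; MyRemoteObject and the object URI are placeholders, not the actual types):

    // Each application instance registers an HttpChannel with port 0,
    // so the runtime picks a free port dynamically.
    // MyRemoteObject is a placeholder for the actual remoted type.
    using System;
    using System.Runtime.Remoting;
    using System.Runtime.Remoting.Channels;
    using System.Runtime.Remoting.Channels.Http;

    var channel = new HttpChannel(0);   // port 0 = let the system choose a free port
    ChannelServices.RegisterChannel(channel, false);
    RemotingConfiguration.RegisterWellKnownServiceType(
        typeof(MyRemoteObject), "MyRemoteObject.rem", WellKnownObjectMode.Singleton);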
Thanks in Advance!
Yes, there will be a limit to the number of ports available, which is 65535 minus the number of ports already in use by existing services (for example, SMTP [25], HTTPS [443], SQL Server [1433], etc.).
So on a typical Windows server, a finger-in-the-air calculation would be 65535 - 1024 (the well-known service ports <= 1024, which are considered out of bounds) - another 10-20 or so for other applications (SQL Server, MySQL, Oracle, etc.). This would leave around 64,490 ports available.
However, will you really be running 64,000 instances of your server?