I have a (Spring Boot) server, and a client trying to establish a WebSocket connection to it.
I've run and tested the server on a Linux machine (Ubuntu 20.04), and it works fine.
It also ran fine on my Windows machine (Windows 10 Home) until a few days ago. Now it is acting strangely.
I checked the network traffic between client and server in Wireshark, both on Linux and on Windows.
Here is the Linux capture:
And this is the Windows capture:
The blacked-out IPs are the client's. Both the Linux and Windows servers run in the same network, so the problem should not be a router configuration.
In both cases, the client makes the same request to /location/websocket, but on Linux the server responds successfully in less than 1 second, while on Windows it responds about 13 seconds later and immediately follows the response by closing the WebSocket connection.
What looks strange to me are the NBSTAT name queries. I tried several times, and there are always three queries between the arrival of the client request and the closing of the WebSocket connection.
So maybe the Windows machine needs to do a name query before it can respond? Is this normal? What does the <00>...<00> string in the name query mean? I checked the network traffic while keeping the server up but not connecting the client, and I didn't see any activity on port 137, so it definitely only happens when the client tries to contact the Windows machine. What can I do about this? How can I get the server responding to WebSocket connections again?
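To test whether the delay is really name resolution, here is a minimal sketch I can run on the Windows machine (plain JDK; the IP below is just a placeholder for the client's address) that times a reverse lookup of the client's IP and checks whether it accounts for the ~13 seconds:

import java.net.InetAddress;

public class ReverseLookupTimer {
    public static void main(String[] args) throws Exception {
        // Placeholder: substitute the actual client IP seen in the capture.
        String clientIp = args.length > 0 ? args[0] : "192.168.1.50";

        long start = System.nanoTime();
        InetAddress addr = InetAddress.getByName(clientIp);
        // getCanonicalHostName() performs a reverse lookup; on Windows this may
        // fall back to NetBIOS name queries if DNS cannot resolve the address.
        String name = addr.getCanonicalHostName();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Resolved " + clientIp + " to '" + name + "' in " + elapsedMs + " ms");
    }
}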
Related
If I start an SSH connection from my Windows 10 laptop, it gets aborted within a minute, even when I'm actively using the connection.
I tried multiple servers (Ubuntu 18, 16 and ESXi 6.7), all with the same problem, and also tried different clients (PuTTY and MobaXterm).
I did a packet capture, and it looks like the connected server sends an RST with ACK to my laptop, after which my laptop responds with FIN and ACK.
If I set up the same connection from my phone with JuiceSSH, it keeps working normally. That's why I suspect my laptop, but I have no idea how to resolve it.
Any ideas?
In PuTTY, feel free to try this: in your session properties, go to Connection and, under "Sending of null packets to keep session active", set "Seconds between keepalives (0 to turn off)" to e.g. 300 (5 minutes).
source: https://patrickmn.com/aside/how-to-keep-alive-ssh-sessions/
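If you use the OpenSSH client that ships with Windows 10 instead of PuTTY, the equivalent setting is ServerAliveInterval in your SSH config; a minimal example (the Host pattern here is just an illustration):

# In %USERPROFILE%\.ssh\config on Windows (or ~/.ssh/config on Linux)
Host *
    ServerAliveInterval 300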
We have to migrate a system running our software from Windows Server 2003 to Windows Server 2012 R2. In this project we only changed the server hardware (to an HP ProLiant server), the OS, and the ISDN card with its CAPI driver. On this server, a C++ application filters a 30-byte character string out of the ISDN D-channel and sends it over a TCP socket (localhost, port 30000) to a Java application. The message comes every 30 seconds and always has the same format.
The problem is: every 6 minutes the TCP socket gets dropped/cleared and stops working. Both applications log the broken communication in their log files, build and open the socket again, and the game goes on without any problems for another 6 minutes.
On the old system, this software worked for years without any problems on Windows Server 2003 at 9 sites.
What we've already tried, without any positive effect:
deactivated the firewall completely
changed the port to different ones (30001, 30500, 16000, 997)
used the server's own IP (10.16.58.30) instead of localhost
set several timeouts in the TCP parameters in the registry (e.g. KeepAliveTime)
updated Java to the latest version
installed all recommended updates for Windows Server 2012 R2
stripped both applications down to just the socket to ensure that the software itself isn't the problem
used a standard message ('1234567891234567890') instead of the incoming ISDN message to rule out malfunctions caused by strange input data
checked the message length to rule out zero-length messages
checked all buffers on both sides to rule out a buffer overflow
checked whether any other software on the server is using 'our' port
The problem doesn't appear if we send the messages to the server port of the Java app manually from outside, or in different message cycles, with a bash script.
We now suspect some kind of checking mechanism in the operating system that forces our socket to stop communicating. Any suggestions as to what that could be?
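For reference, this is roughly the shape of the stripped-down Java receiving side; a minimal sketch (not our production code) with SO_KEEPALIVE enabled and the exact failure logged, so we can see whether the 6-minute break shows up as a reset, a timeout, or a clean close:

import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class IsdnSocketReceiver {
    public static void main(String[] args) throws IOException {
        // Port 30000 as in the original setup; the other tested ports behave the same way.
        try (ServerSocket server = new ServerSocket(30000)) {
            while (true) {
                try (Socket client = server.accept()) {
                    // Ask the OS to probe the connection instead of silently dropping it.
                    client.setKeepAlive(true);
                    readUntilBroken(client);
                } catch (IOException e) {
                    // Log the exact failure and timestamp of whatever breaks the socket every ~6 minutes.
                    System.err.println(System.currentTimeMillis() + " connection failed: " + e);
                }
            }
        }
    }

    private static void readUntilBroken(Socket client) throws IOException {
        byte[] buf = new byte[30]; // the ISDN message is a 30-byte string
        InputStream in = client.getInputStream();
        int n;
        while ((n = in.read(buf)) != -1) {
            System.out.println(System.currentTimeMillis() + " received " + n + " bytes");
        }
        System.out.println(System.currentTimeMillis() + " peer closed the connection (EOF)");
    }
}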
I'm running a WebSocket server (a command-line program) on port 9000 on a Windows Server 2008 machine. I can't seem to figure out why it will not accept more than about 600 concurrent connections. Testing on my local machine, I can create thousands of concurrent connections. But on the server, I get the following error after about 600:
No connection could be made because the target machine actively refused it
I have tried adjusting the registry entries for the maximum port number and turning off the firewall, to no avail. I have also tried a different WebSocket server implementation. Is there some other setting I need to change?
Edit: I tried this on a Linux server as well, with the same problem.
I found the problem:
It seems to be my client-side internet connection. When I run the same tests from a different network on the client side, I can create thousands of connections.
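For anyone hitting the same wall, this is the kind of minimal test that isolates it; a sketch with plain TCP sockets rather than a real WebSocket client (host and port are placeholders): it opens connections until one is refused and reports the count, so you can run it from different networks and compare.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class ConnectionLimitTest {
    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "example.com"; // placeholder server
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 9000;

        List<Socket> sockets = new ArrayList<>();
        try {
            while (true) {
                Socket s = new Socket();
                s.connect(new InetSocketAddress(host, port), 5000);
                sockets.add(s);
                if (sockets.size() % 100 == 0) {
                    System.out.println(sockets.size() + " connections open");
                }
            }
        } catch (IOException e) {
            System.out.println("Failed after " + sockets.size() + " connections: " + e);
        } finally {
            for (Socket s : sockets) {
                try { s.close(); } catch (IOException ignored) { }
            }
        }
    }
}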
My company is currently using TortoiseSVN 1.6.16 32-bit on Windows XP to connect via HTTPS to a VisualSVN Server 2.1.19 running on Windows Server 2003 residing in the same network (no proxy). We use a self-signed certificate and Kerberos authentication with Windows credentials (I suppose this is a VisualSVN-specific feature). In this setup, everything works dandy.
When my company decided to move on to Windows 7, we tried TortoiseSVN 1.7.6 64-bit on Windows 7 64-bit which resulted in the following problem:
Any operation involving the server (repo-browser, checkout, update, checkin, ...) is unbearably slow, e.g.:
opening the repo-browser (10 projects): 15 min
update on a fresh checkout of 50 files: 1 min
checkin of a single empty file: 30 sec
TortoiseSVN alternates between showing normal transmission speeds and 0 bytes/s. Many small files seem to be slower than a few big ones.
The slow connection results in various failures when using neon as the HTTP library (serf is still slow, but operations finish successfully without errors).
EasySVN, SmartSVN and the SVN command-line client that comes with TortoiseSVN show the same behaviour. The same goes for TortoiseSVN 1.6.16 64-bit.
Changing the server protocol to HTTP (no SSL) does not improve the situation.
On the other hand:
TortoiseSVN 1.7.6 32-bit on Windows XP works fine with our server
Access via browser/WebDAV works well even under Windows 7
Server side logs do not show errors or even warnings
I found several posts that also complained about slow behaviour on Windows 7, but they didn't fit my case because they concerned local operations or were restricted to TortoiseSVN.
As there is no indication of a general problem with Subversion on Windows 7, I suspect that the cause could be our OS's networking parameters or protocol versions. Are there any parameters that are known to influence Subversion's performance?
I have to admit I am not familiar with how exactly Subversion (or rather neon/serf) relies on the OS, and on which parts. Any information on that would be greatly appreciated.
Are there any parameters in the Subversion 'servers' file which I should test? How would you rate my chances that Wiresharking the connection will help?
Similar experiences, opinions, hints, help and straws to grasp at are welcome.
Wireshark shows sporadic gaps of about 5 seconds in the TCP stream, apparently caused by VisualSVN Server:
https: the server acknowledges the Client Hello, then waits 5 seconds before sending its Server Hello
https: the server acknowledges the client key and then takes 5 seconds before supplying its encrypted handshake data
https: even outside the handshake, the server sometimes sends an ACK (on the TCP level) and then waits 5 seconds before sending anything back to the client (the data is encrypted, so it's hard to tell whether the pause occurs at some point of interest)
http: at both server-side transmissions during NTLM authentication
http: before the server sends a FIN flag
A typical failure with Windows 7 against an older server is IPv6 networking.
If your machine does not have an SVN server listening on an IPv6 address, Windows 7 might still try a TCP6 connect first (you can see it in Process Explorer if you look at the open sockets of the TortoiseSVN process while it attempts an operation); this times out after a few seconds and then retries over IPv4.
Simple solutions are to either upgrade your server to an IPv6-capable one or disable IPv6 on the Windows 7 clients.
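If you want to confirm this before changing anything, here is a minimal sketch (plain JDK; the hostname is a placeholder for your SVN server) that lists the addresses the client resolves for the server name and times a plain connect to each one; if an IPv6 address comes first and its connect attempt burns several seconds before failing, you have found the gap.

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class DualStackConnectCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "svnserver.example.com"; // placeholder
        int port = 443; // or 80 for plain http

        // Addresses are tried in the order the resolver returns them,
        // which is the order the SVN client would also see.
        for (InetAddress addr : InetAddress.getAllByName(host)) {
            long start = System.nanoTime();
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(addr, port), 21000);
                System.out.println(addr + " connected in " + millisSince(start) + " ms");
            } catch (Exception e) {
                System.out.println(addr + " failed after " + millisSince(start) + " ms: " + e);
            }
        }
    }

    private static long millisSince(long startNanos) {
        return (System.nanoTime() - startNanos) / 1_000_000;
    }
}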
Another thing you could check (the answer above didn't work for us) is the Internet Explorer settings, especially if you have IE9. We found that after disabling the option "Automatically detect settings" under Internet Options -> Connections tab -> LAN settings, SVN started working normally again.
The issue was never properly cleared up. Most probably, the company-internal network path between the client and the server was somehow at fault. The matter became obsolete when we moved the SVN server to another machine. The very same setup of server and clients now works fine, even with Windows 7.
I had the same symptoms of a very slow repository browser, slow updates, slow everything.
My SVN server has two Ethernet cards, so it has two IP addresses, but it was listening on only one of them. A name resolution via WINS or NetBIOS could therefore resolve to the 'wrong' IP address.
TortoiseSVN would retry, eventually the name resolution would find the 'correct' IP address, and things would work.
First of all, sorry for my bad English : )
My Java application (a multiplayer game server) uses this package to communicate via WebSockets with a web application in the client's browser: https://github.com/TooTallNate/Java-WebSocket
I've encountered a problem running my application: only I can connect to the WebSocket server; clients on other hosts can't. In the browser I establish the connection as usual, and the address here is certainly correct:
new WebSocket("ws://"+serverIp+":8787");
When I connect from my own host to the WebSocket server running on the same host, it works perfectly. When other hosts try to connect to me, the connection is not established: in the browser, the WebSocket object's .readyState is 0 (while it should be 1), and the server does not even receive any handshakes (no output from onClientOpen in the server console; I even tried to get output from various WebSocketServer class methods).
Other hosts do receive, for example, the static content of the web application from the web server on port 80 on the same host. The problem is not a closed port 8787: I checked, and it's open.
What may be the reason that other hosts can't connect to my WebSocket server?
WebSockets use a cross-origin permission system. You might need to tell your WebSocket server to accept connections from more than just your local host. The verification of the Origin header happens during the WebSocket handshake, which likely happens prior to onClientOpen.
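It is also worth double-checking which address the server is bound to and what Origin the remote browsers actually send. A minimal sketch, assuming a recent version of the TooTallNate Java-WebSocket library (the method names come from its WebSocketServer base class): binding to all interfaces rather than "localhost" and logging the remote address and Origin header of every handshake makes it easy to see whether remote clients reach the server at all.

import java.net.InetSocketAddress;
import org.java_websocket.WebSocket;
import org.java_websocket.handshake.ClientHandshake;
import org.java_websocket.server.WebSocketServer;

public class GameSocketServer extends WebSocketServer {

    public GameSocketServer() {
        // Bind to all interfaces; new InetSocketAddress("localhost", 8787)
        // would only accept connections from the same host.
        super(new InetSocketAddress(8787));
    }

    @Override
    public void onOpen(WebSocket conn, ClientHandshake handshake) {
        System.out.println("Client connected from " + conn.getRemoteSocketAddress()
                + ", Origin: " + handshake.getFieldValue("Origin"));
    }

    @Override
    public void onMessage(WebSocket conn, String message) {
        System.out.println("Received: " + message);
    }

    @Override
    public void onClose(WebSocket conn, int code, String reason, boolean remote) {
        System.out.println("Client disconnected: " + reason);
    }

    @Override
    public void onError(WebSocket conn, Exception ex) {
        ex.printStackTrace();
    }

    @Override
    public void onStart() {
        System.out.println("Listening on " + getAddress());
    }

    public static void main(String[] args) {
        new GameSocketServer().start();
    }
}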