We are attempting to connect to a WebDAV server using net use over SSL. On some servers we're seeing an issue in which this connection only succeeds if we specify port 443 in the URL.
Does Map
net use * "https://example.com:443/folder"
net use * "\\example.com#SSL#443\folder"
and, bizarrely, so does this:
net use * "\\example.com#SSLasdf\folder"
Does Not Map
net use * "https://example.com/folder"
net use * "\\example.com#SSL\folder"
In the non-working cases we consistently receive the following error:
System error 67 has occurred.
The network name cannot be found.
We have noticed some things that might be useful information:
We have a test server that's configured the same way as the prod server and it works as expected.
In the non-working cases, no incoming requests are ever seen at the prod server from the failing host.
All clients are based on the same image.
The problem does not manifest uniformly on all clients -- some work, some don't.
There is an existing, valid entry for example.com in the client DNS cache.
Flushing the client DNS cache of the affected servers does not resolve the problem.
Once the problem appears, it seems to stick. That is, if I execute one of the working mappings, delete it, and then immediately execute one of the non-working mappings, the problem persists.
We are utterly stumped. Any theories?
You are seeing different behaviors because you are connecting using different names. Once a name has been attempted and failed, the WebClient (this is the service that enables WebDAV) will cache the response for a period. To clear the cache, locate the WebClient service in the Services console and restart it. Or from an administrative command prompt execute the following command:
net.exe stop webclient && net.exe start webclient
We ultimately determined that we were misinterpreting the System Error 67 that net use was returning. We discovered two interesting things:
If the WebDAV server returns a 404 or a 50x on the initial, root-folder PROPFIND, net use will (rightly) interpret this as the root folder being unavailable. The fact that it says the network name could not be found led us to believe that the problem was with name resolution, but it was really just saying, 'hey, I couldn't find anything at this path.'
If net use fails due to a 404/50x, it appears that for a brief period it will automatically fail any additional mappings for that same host without issuing a request. For example, if net use http://me.com/foo returns a 404, then net use http://me.com/bar will instantly fail if made in rapid succession after that first call, and no request will be seen in the WebDAV server logs.
My best guess is that appending the #443 port didn't make any real difference. What it perhaps did do was to trick net use into thinking it was talking to a different host, at least for the purposes of its 'auto-fail' feature. But that's just a guess.
I'm trying to get my head around SNMP for a project I'm working on. After I failed miserably getting it to work in my company's network, I set up a simple 3-device network to test things on, consisting of two Windows 10 PCs and a managed switch between them.
I installed the optional "SNMP" feature on both PCs, made sure the service is running correctly, and configured both services to accept SNMP queries from each other. I made sure to open up UDP port 161 in both PCs' firewalls. Then I got the Net-SNMP binaries in order to use snmpget and snmpwalk. As an alternative, I set up the SNMP extension for PHP through XAMPP (since I want to use PHP in my project once I get SNMP to work). Finally, I installed Wireshark to monitor what exactly is going on, and this is what I found:
When I try SNMPGET or SNMPWALK either through cmd or as a PHP command, I always get a timeout message. Wireshark is showing the get-next-request leaving one PC and arriving correctly on the other, so the network connection itself is working fine. But the receiving PC never sends a response. As I said, I'm pretty new to SNMP and I'm at a loss as to why this is happening. As I understand it, the optional feature for Windows 10 comes with its own SNMP agent, correct? If so, what could cause it to simply ignore an incoming request from a valid source IP?
The funny thing is that this even happens when I try to send an SNMP query to 127.0.0.1. I have no idea what I'm doing wrong...
Thanks to Lex Li's comment, I was finally able to figure out which step I got wrong:
When setting up the SNMP service, under the security tab, I had to add 'public' as an accepted community name (with READ-ONLY rights). I figured since 'public' is sort of the standard read-only community, it would be accepted by default, which apparently it is not.
Alternatively, I guess I could have added my own community name, but I didn't try that since I only want to read some values through SNMP anyway, and read-only access is all I need for that.
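With 'public' accepted as a read-only community, a test query along these lines should return the system description instead of timing out (the IP address is just a placeholder for the other PC; 1.3.6.1.2.1.1.1.0 is sysDescr.0):
snmpget -v 2c -c public 192.168.0.10 1.3.6.1.2.1.1.1.0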
Thank you very much Lex Li, I'm off to continue my project now!
I'm trying to connect to vsftpd using lftp with FTPS (FTP over SSL). However, I keep getting an error like this:
gnutls_record_recv: A record packet with illegal version was received
What is the solution?
This error is misleading. In reality, any number of server-side errors or problems will yield an error like this, and vsftpd does not do very good logging of errors that occur.
For example, in one case I was able to identify that vsftpd was trying to chroot into a directory that did not exist for the user I was logging in with - once I created the directory the error went away.
In another, a PAM script was misfiring resulting in the same error from lftp.
In other words, the error implies that some kind of problem occurred on the server that it was not able to handle gracefully, so it just terminates the connection, which produces this error on the client. You need to go through the vsftpd config to figure out what is going on - start by switching things off (PAM scripts, chroot settings, and so on), working back towards the base configuration until you hit the point where it starts working.
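As a sketch of what that elimination pass can look like, something along these lines in vsftpd.conf (the option names are standard vsftpd settings, but verify them against your version's man page; the log path is just an example):
# turn logging up so the failing step shows in the log
xferlog_enable=YES
log_ftp_protocol=YES
vsftpd_log_file=/var/log/vsftpd.log
debug_ssl=YES
# then temporarily simplify suspects one at a time, e.g.:
# chroot_local_user=NO
# pam_service_name=ftp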
The main point, though, is that the error does not have a single, specific cause but obscures what could be any one of a number of vsftpd configuration problems.
I've literally searched the internet for the last 5 hours, I have tried every suggestion out there, and I'm starting to wonder if what I want to do is simply not possible.
Most webservers only allow X simultaneous connections for uploading/downloading. I simply want to upload my many files faster by connecting/uploading through various proxies. However, no program I can find has anything for automatic proxy configuration, only for a single specific proxy IP. I have an account with a proxy service that gives you a different IP address for every request/connection made through it. I can connect to this fine from any FTP program, but it appears that the servers get confused when they see different IPs connecting, and there's no way to manually whitelist/authenticate them on the server side, so they simply close all connections. I even have a list of IP addresses with port/user/pass that I am willing to use, but I can't figure out how to do anything other than use a single specific proxy to upload/download from servers. Is this even possible?
ANY HELP/INPUT IS GREATLY APPRECIATED!!
I am working on a Windows (Microsoft Visual C++ 2005) application that uses several processes
running on different hosts in an intranet.
Processes communicate with each other using TCP/IP. Two communicating processes may be on the
same host or on different hosts.
We currently have a bug that appears irregularly. The communication seems to work
for a while, then it stops working. Then it works again for some time.
When the communication does not work, we get an error (apparently while a process
was trying to send data). The call looks like this:
send(socket, (char *) data, (int) data_size, 0);
By inspecting the error code we get from
WSAGetLastError()
we see that it is an error 10054. Here is what I found in the Microsoft documentation
(see here):
WSAECONNRESET
10054
Connection reset by peer.
An existing connection was forcibly closed by the remote host. This normally
results if the peer application on the remote host is suddenly stopped, the
host is rebooted, the host or remote network interface is disabled, or the
remote host uses a hard close (see setsockopt for more information on the
SO_LINGER option on the remote socket). This error may also result if a
connection was broken due to keep-alive activity detecting a failure while
one or more operations are in progress. Operations that were in progress
fail with WSAENETRESET. Subsequent operations fail with WSAECONNRESET.
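For reference, this is roughly how the two options mentioned in that excerpt are set on a Winsock socket (a minimal sketch for an already-connected SOCKET s; the values shown are only examples, not what our application does):
// Hard close: discard unsent data and send a RST when closesocket() is called.
LINGER lng;
lng.l_onoff = 1;   // enable SO_LINGER
lng.l_linger = 0;  // zero timeout => abortive ("hard") close
setsockopt(s, SOL_SOCKET, SO_LINGER, (const char *) &lng, sizeof(lng));

// TCP keep-alive: let the stack probe an idle connection and report a
// failure (WSAENETRESET / WSAECONNRESET) if the peer has gone away.
BOOL keepAlive = TRUE;
setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, (const char *) &keepAlive, sizeof(keepAlive));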
So, as far as I understand, the connection was interrupted by the receiving process.
In some cases this error is (AFAIK) correct: one process has terminated and
is therefore not reachable. In other cases both the sender and receiver are running
and logging activity, but they cannot communicate due to the above error (the error
is reported in the logs).
My questions.
What does the SO_LINGER option mean?
What is a keep-alive activity and how can it break a connection?
How is it possible to avoid this problem or recover from it?
Regarding the last question: the first solution we tried (actually, it is more of a
workaround) was resending the message when the error occurs. Unfortunately, the
same error occurs over and over again for a while (a few minutes), so this is not
a solution.
At the moment we do not understand whether we have a software problem or a configuration
issue: maybe we should check something in the Windows registry?
One hypothesis was that the OS runs out of ephemeral ports (in case connections are
closed but the ports are not released because of TcpTimedWaitDelay), but after analyzing
this we think there should be plenty of them: the problem occurs even
if messages are not sent very frequently between processes. However, we are still not
100% sure that we can exclude this: can ephemeral ports get lost in some way?
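For what it's worth, a quick way to count how many sockets are sitting in TIME_WAIT on a Windows machine is the command below; if the number stays far below the ephemeral port range, port exhaustion seems unlikely:
netstat -an | find /c "TIME_WAIT"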
Another detail that might help: in each process, sending and receiving happen
concurrently in separate threads. Are there any shared data structures in the
TCP/IP libraries that might get corrupted?
What is also very strange is that the problem occurs irregularly: communication works
OK for a few minutes, then it does not work for a few minutes, then it works again.
Thank you for any ideas and suggestions.
EDIT
Thanks for the hints confirming that the only possible explanation was a connection closed error. By further analysis of the problem, we found out that the server-side process of the connection had crashed / had been terminated and had been restarted. So there was a new server process running and listening on the correct port, but the client had not detected this and was still trying to use the old connection. We now have a mechanism to detect such situations and reset the connection on the client side.
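A minimal sketch of the idea (simplified; the helper name and the retry details here are illustrative, not our exact code):
int sent = send(socket, (char *) data, (int) data_size, 0);
if (sent == SOCKET_ERROR)
{
    int err = WSAGetLastError();
    if (err == WSAECONNRESET || err == WSAENETRESET)
    {
        // The old connection is dead (e.g. the server process was restarted):
        // drop it and establish a fresh connection before retrying.
        closesocket(socket);
        socket = ReconnectToServer();   // illustrative helper, not a Winsock call
        // ... resend 'data' on the new socket ...
    }
}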
That error means that the connection was closed by the
remote site. So you cannot do anything in your program except accept that the connection is broken.
I was facing this problem for some days recently and found out that an Adobe Acrobat Reader update was the culprit. As soon as you completely uninstall Adobe Reader from the system, everything returns to normal.
I spent a long time debugging a 10054/10053 error in S3 pre-signed uploads.
It turns out that the S3 server will reject pre-signed uploads for the first 15 minutes of its life.
So - if you're debugging S3, check that it's not a new bucket.
If you're debugging something else, this is most likely a problem on the server side, not the client side.
Stupid problem. I get these errors from a client connecting to a server. Sadly, the setup is complicated, which makes debugging complex - and we have run out of options.
The environment:
* Client/server system, both running on the same machine. The client is actually a service doing some database manipulation at specific times.
* The connection goes from C# through OleDb to an EasySoft JDBC driver to a custom-written JDBC server that then hosts logic in C++. Yeah, complex - but the third-party supplier decided to expose the extension mechanisms for their server through a JDBC interface. Not a lot can be done here ;)
The Symptom:
At (ir)regular intervals we get an "Address already in use: connect" error from the JDBC driver. The errors seem to come from one particular service we run.
Now, I did read all the stuff about port exhaustion, which is why we have a little tool running that counts ports and their states every minute. Last time this happened, we had an astonishing 370 ports in use, with the count rising to about 900 AFTER the error. We already patched the registry (it is a Windows machine) to allow more than the standard 5000 client ports, but even then, we are far, far from that limit to start with.
Which is why I am asking here. Anyone have an idea what ELSE could cause this?
It is a Windows 2003 Server machine, 64-bit. The only other thing I can see that might cause it (but this functionality is supposedly disabled) is the Symantec Endpoint Protection that is installed on the server - being capable of acting as a firewall, it could conceivably intercept network traffic. I don't want to open a can of worms by pointing to Symantec prematurely (if pointing to Symantec can ever be seen as such). So, does anyone have an idea what else may be the cause?
Thanks
"Address already in use", aka WSAEADDRINUSE (10048), means that when the client socket prepared to connect to the server socket, it first tried to bind itself to a specific local IP/Port pair that was already in use by another socket, either an active one or one that has been closed but is still in the FD_WAIT state. This has nothing to do with the number of ports that are available.
I'm having the same issue on a Windows 2000 Server with a .NET application connecting to a SQL Server 7.0. There are about 10 servers with the same configuration and only one is showing this error several times a day. With a small test program I'm able to reproduce the error just by establishing a TCP connection on the SQL Server listening port. Running CurrPorts (http://www.nirsoft.net/utils/cports.html) shows there are still plenty of available ports in the range 1024-5000.
I'm out of ideas and would like to know if you've found a solution since you've posted your question.
Edit: I finally found the solution: a worm (WORM_DOWNAD.A) was present on the server and was exhausting local ports without being noticed.