I have the strangest problem with FTP. Immediately after a reboot, the Windows command-line ftp program works fine.
Then I try to use a friendlier FTP client (Total Commander, which I have used for years), and it can log in, get folder contents -- everything works until I try to upload a file. Then the connection seems to get closed suddenly, and FTP ability is damaged from then on.
Here is the strange part: FTP upload then seems to be permanently broken, and the way it is broken is quite strange. I go back to the command-line ftp client, and I can connect, list folders, change folders, download files of any size, and upload small files (<2 KB), BUT uploading a file bigger than 2 KB causes the connection to be dropped, and the command times out.
Reboot the OS (64-bit Windows 7) and everything is fine again.
I have disabled the Windows Firewall, since many discussions centered on this. The behavior is the same whether the Windows Firewall is on or off.
I have tried toggling PASV mode, and again, the behavior is the same whether passive mode is used or not.
I have tried both binary and ASCII mode, and the behavior is the same.
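In case it helps anyone reproduce this, here is a minimal sketch of the same test matrix (small vs. large upload, passive vs. active) using Python's standard ftplib; the host, credentials, and file names are placeholders for my actual setup.

```python
# Minimal sketch of the upload test matrix described above.
# HOST, USER and PASSWORD are placeholders; the uploaded files are
# dummy in-memory buffers just below and just above the ~2 KB threshold.
import io
from ftplib import FTP

HOST, USER, PASSWORD = "ftp.example.com", "user", "password"

def try_upload(size, passive):
    """Upload `size` bytes of dummy data and report success or failure."""
    data = io.BytesIO(b"x" * size)
    ftp = FTP(HOST, timeout=30)
    try:
        ftp.login(USER, PASSWORD)
        ftp.set_pasv(passive)          # toggle PASV exactly as with the CLI client
        ftp.storbinary(f"STOR test_{size}_{passive}.bin", data)
        print(f"size={size:>5}  passive={passive}  -> OK")
    except Exception as exc:
        print(f"size={size:>5}  passive={passive}  -> FAILED: {exc}")
    finally:
        try:
            ftp.quit()
        except Exception:
            ftp.close()                # connection may already be dead

for passive in (True, False):
    for size in (1024, 4096):          # below and above the ~2 KB threshold
        try_upload(size, passive)
```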
This problem began when I reinstalled Windows 7 Enterprise, Service Pack 1, on a new disk and installed all 146+ patches and updates.
The network is not a problem: there are no problems with my laptop connected to the same network, and no problems with the remote server when accessed from other machines. It is clearly isolated to this new installation of Windows.
Any ideas at all how running one program could cause the command-line ftp client to become unable to send files if they are >2 KB?
The problem has magically gone away, and this question has won me the time-honored "Tumbleweed" badge.
While it is inconclusive, the problem appeared to be something with the network driver. The application was making a request that somehow put the driver into a mode which would limit sending (only for FTP, as far as I know) to less than 2 KB. I reinstalled the latest network driver and the latest chipset drivers and rebooted a number of times, to no effect. However, I installed the HP printer driver, and poof, the problem went away. This might simply be a coincidence, or there might be some OS utility that got updated as a result. If anyone has clues as to why installing HP printer software would fix a network card, please share.
Related
I'm having a very strange problem with an application on Windows 10. It consists of several .exe files on the same computer communicating with each other over sockets, using the System.Net.Sockets library.
The problem I have is that after installing Windows 10 on a new computer, installing all Windows updates, and then installing that application, the socket connections don't work correctly and the application fails. The strangest thing is that if you leave the computer alone for 1-2 days, the application starts working just fine. The same thing happened after installing the version 1803 update: it stops working and then works again one or two days later.
Any idea what it could be? Has anyone seen something similar?
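The application itself uses System.Net.Sockets in .NET, but as a rough sanity check of the OS socket layer on an affected machine, a loopback test along these lines (sketched in Python; the port number is arbitrary) should show whether sockets can be created and connected at all:

```python
# Minimal loopback test: can we create a socket, bind, listen, and connect?
# The port is arbitrary; this is only a sanity check of the OS socket layer,
# not a reproduction of the real .NET application.
import socket

PORT = 50007  # arbitrary free port

try:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", PORT))
    server.listen(1)

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", PORT))
    conn, _ = server.accept()

    client.sendall(b"ping")
    print("loopback OK, received:", conn.recv(4))

    conn.close()
    client.close()
    server.close()
except OSError as exc:
    # On affected machines, socket creation or connection itself can fail
    # (the error 10022 case mentioned further down).
    print("socket layer failed:", exc)
```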
It really seems to be related to the 1803 update you mentioned.
Symptoms:
Running an application from a network share will fail when creating a socket;
Copying the very same application to a local drive/path will work just fine, without any further modification.
We are also struggling with this while connecting to an Oracle database (both ODBC and ODP.NET) and it seems the issue has recently been acknowledged:
https://support.oracle.com/knowledge/Oracle%20Database%20Products/2399465_1.html
It also seems this is a recurrent Windows bug:
Win Socket Creation fails with Error code 10022 if non super user
https://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/3076a9cd-57a0-418d-8de1-07adc3b486bb/socket-fails-with-error-10022-when-application-is-run-from-certain-network-shares-on-vista-and?forum=wsk
Sorry, no effective solution at this time (other than copying the app binaries to a local folder). I'll update this answer once we get a better solution.
OK, looking a little further, I found here on SO that this might be related to an SMBv1 network share, which describes the environment we had here (SMBv2/SMBv3 had been disabled on the server because of another bug we faced - thanks MSFT).
Re-enabling SMBv2 / SMBv3 on the server solved the issue.
Related post:
After Windows 10 update 1803 my program can't open a socket when running from network share
I downloaded the latest version of Appium from GitHub. I have installed it on two Windows PCs.
On the first one it works fine.
But on the second one, just clicking "Start server v1.7.2" makes a window appear showing "The server is stopped".
What is wrong with the configuration?
I found the solution for the above error.
It was due to the firewall being modified by an antivirus, which blocked all open ports on this PC.
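For what it's worth, a quick way to confirm that the port is actually blocked (rather than the server failing for some other reason) is to probe it directly. A small sketch in Python, assuming Appium's usual default port 4723 (adjust if your configuration differs):

```python
# Quick probe: is anything reachable on the Appium server port?
# 4723 is Appium's usual default; change it if your setup uses another port.
import socket

HOST, PORT = "127.0.0.1", 4723

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(3)
try:
    sock.connect((HOST, PORT))
    print(f"Port {PORT} is reachable - something is listening.")
except OSError as exc:
    # A blocked or filtered port typically shows up here as a timeout
    # or 'connection refused'.
    print(f"Port {PORT} is not reachable: {exc}")
finally:
    sock.close()
```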
There are roughly three possible solutions:
Formatting the PC would restore the open ports, but it is not a recommended solution.
Uninstalling the antivirus can open the blocked ports by restoring the firewall to its original state.
If the 2nd solution does not help, then uninstalling the antivirus and creating a new user account on the PC is what allowed me to run the server successfully.
We've successfully set up WDS (Windows Deployment Services) and had it all working (it's serving an unattended Windows 7 x64 installation; the user only has to F12 and then wait for the install to finish), but it no longer works the way it did before.
We're trying to F12 the exact same machine where it used to work. The WDS part of the installation is still automatic (unattended), but ImageUnattend.xml does not seem to run on the client at all now: it gets stuck at the language selection, and everything after that, which is supposed to be automatic, is manual as well.
Inspecting C:\windows\panther on the client machine shows that WDS pops up with an error: WDS CallBack_WdsClient_CopyPrivatesDone: Failed to process client unattend variables.
Changing "%MACHINENAME%" to "*" in the ImageUnattend.xml file makes it all automatic again, however it then renames the computer incorrectly.
The variable %MACHINENAME% worked before, so why does it not work now? Has anyone else run into this issue before?
Using a different user (domain administrator) in the ImageUnattend.xml file does not seem to change anything.
After countless attempts with at least 15 new ImageUnattend.xml files, I decided to restart the server and use the original files I knew had worked before.
This fixed it.
I can't start applications from a network share or drive. An error appears saying that the application was unable to start (0xc0000006). If I copy the .exe to my desktop, it works fine.
I tried starting Windows in safe mode, and it works there too.
My machine is an HP laptop with a Core i5, running Windows 7 SP1.
Any idea?
EDIT:
I found my problem: it's a bug that happens sometimes with Kaspersky Endpoint Security v.10. I just uninstalled this version and installed an older version (v.8). I hate Kaspersky...
Hope it will help someone!
0xc0000006 is an NTSTATUS code. Specifically, it is STATUS_IN_PAGE_ERROR.
It is not uncommon to see these errors when you attempt to run an executable from a network volume. If there is any problem accessing the network volume, even an intermittent one, you may see this error. When a module is loaded, the code is not physically loaded until it is needed: a memory-mapped file is created, and when a particular page is needed, it is brought into physical memory on demand. If your network fails to meet this demand, your application stops with STATUS_IN_PAGE_ERROR.
The common ways to deal with this include:
Getting a more robust connection to your network volumes.
Copying the executable file to a local drive and running it from there.
Adding the IMAGE_FILE_NET_RUN_FROM_SWAP flag to your PE file options.
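For the third option, the flag can typically be set with Microsoft's editbin utility (editbin /SWAPRUN:NET yourapp.exe) or the equivalent linker option. As a rough sketch (the path below is a placeholder), you can check whether an executable already carries the flag by reading the Characteristics word of its COFF header:

```python
# Rough sketch: check whether a PE file has IMAGE_FILE_NET_RUN_FROM_SWAP set.
# The flag tells Windows to copy the image to the pagefile when it is run
# from a network volume, so pages are not demand-loaded over the network.
import struct

IMAGE_FILE_NET_RUN_FROM_SWAP = 0x0800
PATH = r"\\server\share\yourapp.exe"   # placeholder path

with open(PATH, "rb") as f:
    f.seek(0x3C)
    e_lfanew = struct.unpack("<I", f.read(4))[0]      # offset of the "PE\0\0" signature
    f.seek(e_lfanew)
    assert f.read(4) == b"PE\0\0", "not a PE image"
    coff = f.read(20)                                  # COFF file header (20 bytes)
    characteristics = struct.unpack_from("<H", coff, 18)[0]  # last WORD of the header

if characteristics & IMAGE_FILE_NET_RUN_FROM_SWAP:
    print("IMAGE_FILE_NET_RUN_FROM_SWAP is set")
else:
    print("IMAGE_FILE_NET_RUN_FROM_SWAP is NOT set")
```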
Thank you for your replies.
I solved the problem by uninstalling Kaspersky Endpoint Security 10.
My colleagues have version 10 of Kaspersky and it works for them, but not for me.
I will install an older version while waiting for Kaspersky v11.
Having a weird issue. I'm new to Macs and have a Windows VM that I'm running on a new MacBook Pro via VMware Fusion. I set up a file share on the Windows side (Win 7) and accessed it from the Mac side using the "Connect to Server" dialog. I did it successfully several times, even adding a symlink on the Mac side and starting a git repository. About halfway through my first pull from my git server, the pull froze (i.e. didn't continue pulling). I waited for quite a while before killing the terminal window, and after that I was no longer able to connect to the share in any fashion. I've tried removing the share on both sides and rebooting both sides, but ever since then, trying to connect back to that VM gives me an error: "There was an error connecting to the server {ip address}. Check the server name or IP address, and then try again."
The IP is right; I've tried it with the name as well (which is how I did it originally), which was also right. I can ping both the IP and the name from the Mac side to the Windows side. I have tried editing /etc/hosts to point a name at the IP address that way, same result. I've tried turning off the Windows firewall and antivirus, no difference.
I guess I'd assume it was me not doing something right with the shares, except that it went from working to not working without me changing any settings. It's a new box, so it's possible that an OS patch (on either side) caused the change, but I didn't notice any going in during the time in question.
UPDATE: I pulled another new Mac patch down (I guess I didn't have them all) and it worked again, right up until I froze it again with the git issue (I had tried to resolve the pack-size issue that appeared to be the root cause of the git problem; I was wrong).
Is there any process I should look at killing and restarting? This behavior seems like there's a hung process somewhere, though shutting the Mac down and booting it up again isn't helping, so I don't know.
Figured it out. Turns out that turning off sharing on the Windows side, then turning it back on, solved it.