In our particular scenario we have an OSX NFS server which accepts incoming video clips from an encoding device.
The encoding device starts "stream-writing" the clip to the NFS share and the server picks up the newly copied file and starts polling the file as it grows, waiting for it to finish uploading from the NFS client.
What I need to know is the exact moment the NFS client has finished uploading the file. Our current method of polling the file size uses a safety threshold of 10 seconds before it considers the file uploaded, because the NFS copy can stall in transmission due to network or other reasons.
lsof will not work for the NFS mount, and neither will nfsstat or any other command-line tool I've looked at.
So is there a way to reliably and promptly know that the client has finished copying a file to the OSX NFS server mount?
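For reference, the size-stability polling described above can be sketched as follows. This is a minimal illustration, not the actual server code: the 10-second threshold is the one mentioned in the question, while the function name, poll interval, and injectable `get_size`/`sleep`/`clock` parameters are hypothetical (the injection just makes the logic easy to test without real files):

```python
import os
import time

def wait_until_stable(path, threshold=10.0, interval=1.0,
                      get_size=os.path.getsize, sleep=time.sleep,
                      clock=time.monotonic):
    """Block until the file at `path` has not grown for `threshold` seconds,
    then return its final size. Polls every `interval` seconds."""
    last_size = get_size(path)
    stable_since = clock()
    while True:
        sleep(interval)
        size = get_size(path)
        if size != last_size:
            # File is still growing: reset the stability timer.
            last_size = size
            stable_since = clock()
        elif clock() - stable_since >= threshold:
            # No growth for `threshold` seconds: treat the upload as done.
            return size
```

The obvious drawback, as the question notes, is that every transfer pays the full threshold as latency, and a network stall longer than the threshold still produces a false "done".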
I have server-to-client communication taking place in my application that utilizes UDP multicasting. A necessary "tweak" that I have had to make in this setup is to increase the receive buffer size on the client.
In Windows, I have been achieving this by modifying the registry key on the client (receiver):
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Afd\Parameters]
DefaultReceiveWindow = (32-bit DWORD of desired size)
This has worked very well, greatly reducing/minimizing UDP datagram loss.
Now, I am attempting to install a client application and do the same on a Windows VM (guest) that's running on a Windows host. Due to lack of permissions, so far I have only been able to modify this registry setting on the guest OS. It does not seem to be working as I'm used to: I still encounter a lot of datagram loss, as if the change did nothing.
Is it safe to assume that this receive buffer size change is needed to be made on both the guest and the host OS, for the intended beneficial effect to occur? It seems like this would be the case, but - understandably - I am receiving some organizational resistance to performing such a change on the host, as there are other (unrelated) guest OS's that could also be impacted.
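As an aside on the registry approach above: the `DefaultReceiveWindow` value only changes the *default* buffer, so an application that is under your control can sidestep it entirely by requesting a larger buffer per socket with `SO_RCVBUF`. A minimal sketch (not specific to the setup in the question; the OS may clamp or round the request, e.g. Linux caps it at `net.core.rmem_max`, so the code reads the granted value back):

```python
import socket

def make_udp_receiver(rcvbuf_bytes=4 * 1024 * 1024):
    """Create a UDP socket with an enlarged receive buffer.

    Returns the socket and the buffer size the OS actually granted,
    which may differ from the requested size (clamping/rounding).
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf_bytes)
    granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    return sock, granted
```

Note this does not answer the guest-vs-host question, since the host's virtual switch can still drop datagrams before they ever reach the guest's socket buffer.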
I am sending a set of .bin files via TFTP from a Windows server to a Linux client. Files under 50 KB are sent successfully, but any larger file fails to transfer.
I use Python's socket module to send files and receive acknowledgments, respectively.
I am thinking in the following directions:
(1) MTU / buffer size (currently changed to 9000)
(2) Firewall preventing larger files?
(3) Duplex settings mismatch (currently set to 100 Mbps full duplex; does not work under autonegotiation)
(4) Any configuration specific to Windows (the same file is sent successfully from a Linux tftp server)
What could be the possible problems? Please help me narrow down the scope of the issue.
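One more direction worth checking when TFTP is implemented directly over raw sockets: the framing rules of RFC 1350. Data flows in 512-byte blocks, each individually acknowledged, and the transfer is terminated by a block *shorter* than 512 bytes; a file that is an exact multiple of 512 bytes must therefore be followed by an empty final block, or the receiver waits forever. A sketch of the packet framing (function names are illustrative, not from the question's code):

```python
import struct

OP_DATA, OP_ACK = 3, 4
BLOCK_SIZE = 512  # RFC 1350 default unless a blksize option is negotiated

def data_packet(block_num, payload):
    """Build a TFTP DATA packet: 2-byte opcode, 2-byte block number, payload.
    Block numbers are 16-bit and wrap; masking matters for files > ~32 MB."""
    return struct.pack("!HH", OP_DATA, block_num & 0xFFFF) + payload

def split_blocks(data):
    """Split a file into TFTP blocks, appending the empty terminator block
    when the file length is an exact multiple of BLOCK_SIZE."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    if not blocks or len(blocks[-1]) == BLOCK_SIZE:
        blocks.append(b"")
    return blocks
```

Note also that TFTP servers reply from a fresh ephemeral UDP port rather than port 69, which is exactly the kind of traffic a stateful firewall (direction 2) may drop once the transfer involves many packets.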
When I try to unmap network drive letter mappings using WNetCancelConnection2 (or the deprecated WNetCancelConnection), the thread blocks for about 10 seconds before the drive letter is actually unmapped if the file server is unavailable on the network. Is there a faster way to unmap the drive letter when the file server is not (and will not be) available?
Set the third parameter, fForce, passed to WNetCancelConnection2 to TRUE. This should terminate the connection immediately. Be aware that if there are open files or jobs on the connection, you might lose data. So it might be a good idea to use a PING or some other means to establish whether the remote computer is online or not.
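For illustration, the forced disconnect can be issued from Python via ctypes (a Windows-only sketch under the assumption that the mapping is a plain drive letter; `force_unmap` is a hypothetical helper name):

```python
import ctypes
import sys

CONNECT_UPDATE_PROFILE = 0x00000001  # also remove the persistent mapping

def force_unmap(drive_letter):
    """Forcibly cancel a mapped network drive, e.g. force_unmap("Z:").
    Returns the Win32 error code (0 == NO_ERROR). Windows only."""
    if not sys.platform.startswith("win"):
        raise OSError("WNetCancelConnection2 is a Windows API")
    mpr = ctypes.WinDLL("mpr")
    # fForce=True drops the connection even with open files (data-loss risk,
    # as noted above).
    return mpr.WNetCancelConnection2W(drive_letter, CONNECT_UPDATE_PROFILE, True)
```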
I have a big problem. I need to transfer a lot of files from one server to another, but the second server is not a local server. If I transfer to a local server I get 100 Mb/s, but if I send to the remote server the speed is 2 Mb/s. My network is 1 Gb/s. I use the 7z command line.
If your servers are (as you wrote) on the same network and connected through the same line you are most likely to have a network connection problem.
I've often seen that the duplex settings of network cards are not set up correctly which leads to a lot of collisions.
Check your network card settings and try to force for example 100mbps full duplex.
I work for a company where this happens daily when trying to connect IBM network cards with Cisco switches. Have a look here how to set up duplex settings: https://superuser.com/questions/86581/how-do-you-check-the-current-duplex-value-of-a-network-card-set-to-auto-negotiat.
If this doesn't help you might be better off asking at superuser.com
When trying to receive a (large, approx. 100MB) file using an FTP adapter in BizTalk 2006, we run into the following problem, which causes the file to be processed over and over again.
- Retrieving the file succeeds; it is placed into the MessageBox and processed properly
- When the FTP adapter issues the DELE statement, it never reaches the FTP server the file is on (we have verified this by taking a look at the FTP server's logs)
- There are no signs of timeouts on the FTP server; the FTP server log does not mention a timeout occurring
- After the interval time set on the adapter expires, the FTP server will still find the large file that we have already processed in the previous run, because the DELE statement failed
The event log in BizTalk states that ‘The connection to the FTP server was broken prematurely’. That is why we think there is a timeout issue.
We have seen that retrieval of the file takes around 35 minutes. The FTP server timeout is set to 1 hour, so no problems there, I guess.
Then we found the following article: http://www.ncftp.com/ncftpd/doc/misc/ftp_and_firewalls.html#FirewallTimeouts. It states that a firewall / routing device might be responsible for the timeouts. The team managing our firewalls and routers told us that there were no timeouts set here.
Which leaves us in the dark about the cause of our problem. Do any of you have suggestions? Or, even better, the solution?
Have you tried the solutions in this article?
I avoid using the FTP adapter. Instead I use a third party utility to retrieve files and move the transferred file to a file adapter receive location. Third party utilities allow you to configure rules, recovery actions etc, freeing BizTalk from having to manage the transfer.
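If the firewall-idle-timeout theory from the question holds (the control connection sits idle for the whole 35-minute data transfer, and a stateful device drops it), one common mitigation is to enable TCP keepalives on the FTP control socket so the firewall keeps the connection in its state table. A sketch with Python's ftplib (illustrative only; BizTalk itself would need the equivalent at the adapter or OS level, and `TCP_KEEPIDLE` is a Linux-specific knob):

```python
import socket

def enable_keepalive(sock, idle_secs=120):
    """Enable TCP keepalives on an existing socket so stateful firewalls
    don't silently drop it while it sits idle during a long data transfer.
    Returns the resulting SO_KEEPALIVE value (nonzero == enabled)."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):
        # Linux only: start probing after `idle_secs` of inactivity.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle_secs)
    return sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
```

Usage with ftplib would be along the lines of `ftp = ftplib.FTP(host); ftp.login(user, password); enable_keepalive(ftp.sock)` before starting the long retrieval.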