GetFileAttributesEx and network protocols - winapi

I'm using GetFileAttributesEx() to get the file size from a fully qualified file path.
So far this works fine, but one thing in the documentation isn't quite clear to me: the Remarks section mentions that the function is supported by the following network protocols:
Server Message Block (SMB) 3.0 protocol
SMB 3.0 Transparent Failover (TFO)
SMB 3.0 with Scale-out File Shares (SO)
Cluster Shared Volume File System (CsvFS)
Resilient File System (ReFS)
But it's unclear what happens if I use this function on a drive using a network protocol that is not in the list above. Will I get some special error code from GetLastError()?
OTOH, basic functions like FindFirstFile() document exactly the same list of protocols, so maybe the question is pointless. Should I just assume GetFileAttributesEx() works fine as long as the file actually exists?
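For reference, here's roughly what the call looks like (a minimal sketch; the UNC path is just a placeholder):

#include <windows.h>
#include <cstdio>

int main()
{
    WIN32_FILE_ATTRIBUTE_DATA info{};
    if (!GetFileAttributesExW(L"\\\\server\\share\\file.bin",   // placeholder path
                              GetFileExInfoStandard, &info))
    {
        // This is the question: on a share served by a protocol that is
        // not in the documented list, what error does this report?
        std::printf("GetFileAttributesEx failed: %lu\n", GetLastError());
        return 1;
    }

    // The file size is split across two 32-bit fields.
    ULONGLONG size = (static_cast<ULONGLONG>(info.nFileSizeHigh) << 32)
                   | info.nFileSizeLow;
    std::printf("File size: %llu bytes\n", size);
    return 0;
}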

Related

How to get the user-set "custom name" of IOUSBDeviceInterface

If I use the IOKit methods to list USB devices, I can get something like "AirPod Case", but I don't know how to get "Francisco's AirPods". I've looked around at the various keys you can ask for, but none I've found bring up these "settable" names, only the standard "product names".
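For context, this is roughly how I'm enumerating devices and reading the standard name (a sketch; "USB Product Name" is the usual registry key, which only ever gives me the descriptor-derived name):

#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOKitLib.h>
#include <IOKit/usb/IOUSBLib.h>
#include <cstdio>

int main()
{
    io_iterator_t iter = IO_OBJECT_NULL;
    // Use kIOMasterPortDefault on macOS versions before 12.
    if (IOServiceGetMatchingServices(kIOMainPortDefault,
                                     IOServiceMatching(kIOUSBDeviceClassName),
                                     &iter) != KERN_SUCCESS)
        return 1;

    io_service_t device;
    while ((device = IOIteratorNext(iter)) != IO_OBJECT_NULL)
    {
        // Standard product name from the device descriptor, e.g. "AirPod Case".
        CFTypeRef name = IORegistryEntryCreateCFProperty(
            device, CFSTR("USB Product Name"), kCFAllocatorDefault, 0);
        if (name != nullptr && CFGetTypeID(name) == CFStringGetTypeID())
        {
            char buf[256];
            if (CFStringGetCString(static_cast<CFStringRef>(name), buf,
                                   sizeof(buf), kCFStringEncodingUTF8))
                std::printf("Product name: %s\n", buf);
        }
        if (name != nullptr)
            CFRelease(name);
        IOObjectRelease(device);
    }
    IOObjectRelease(iter);
    return 0;
}
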
I don't know the answer as a fact, but I can give you some ideas for chasing it down:
The customised name is probably transferred as part of a higher-level protocol, or via vendor-specific requests, not via standardised USB device descriptors. There is a small chance it might be advertised via a vendor-specific descriptor, but this seems unlikely.
I don't own any AirPods, so I don't know what kind of data protocol the AirPod case uses for communicating with a Mac, but you can try to find documentation or source code for that protocol, for example in case anyone has worked out how to use them from Linux and written a tool or library for that.
Finally, you can reverse engineer it yourself, by logging the USB traffic to and from the device when using existing software that is capable of reading the name you are after. On macOS, it's possible to do this using Wireshark. Start logging USB traffic, launch the software that talks to the device, then trawl through the logs to see if you can spot the string, then work out what request caused it to be returned.

How to detect Windows file closures locally and on network drives

I'm working on a Win32-based document management system that employs an automatic check-in/check-out model. The model it currently uses for tracking documents in use (monitoring the processes of the applications that open the documents) is not particularly robust, so I'm researching alternatives.
Check-outs are easy, as the DocMgt application is responsible for launching the other application (Word, Adobe, Notepad, etc.) and passing it the document.
It's the automatic check-in requirement that is more difficult. When the user closes the document in Word/Adobe/Notepad, ideally the DocMgt system would be notified automatically so it can perform an automatic check-in of the updated document.
To complicate things further the document is likely to be stored on a network drive not a local drive.
Anyone got any tips on API calls, techniques or architectures to support this sort of functionality?
I'm not expecting a magic 3-line solution; the research I've done so far leads me to believe that this is far from a trivial problem and will require significant work to implement. I'm interested in all suggestions, whether they're for a full or partial solution.
What you describe is a common task. It is perfectly doable, though not without its share of hassle. Here I assume that the files are closed on the computer where your code can run (even if the files are stored on the mounted network share).
There exist two approaches to controlling the files when they are used: the filter and the virtual filesystem.
The filter sits in the middle, between the process and the filesystem (any filesystem: local, network, or fully virtual), and intercepts file requests that go to that filesystem. This requires that the filter code runs on the computer through which the requests pass (a requirement that seems to be met in your scenario).
The virtual filesystem is an endpoint for the requests that come from the applications. When you implement the virtual filesystem, you handle all requests, so you always fully control the lifetime of the files. As the filesystem is virtual, you are free to keep the files anywhere including the real disk (local or network) or even in the cloud.
The benefit of the filter approach is that you can control individual files that reside on real disks, while a virtual filesystem can be mounted only as a new drive letter or into an empty directory on an NTFS drive, which is not always feasible. At the same time, sitting in the middle, the filter is to some extent more restricted in what it can do, and the files can be altered while the filter is not running. Finally, filters are more complicated and potentially error-prone, as they sit in the middle and must play nice with other filters and with endpoints.
I don't have specific recommendations, but if a separate drive letter is an option, I would recommend the virtual filesystem.
Our company developed (and continues to maintain for the new owner) two products, CBFS Filter and CBFS Connect, which let you create a filter and a virtual filesystem respectively, all in user mode. Those products are used in many software titles, including some Document Management Systems (which is close to what you do). You will find both products on their website.
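As a partial user-mode stopgap (not a substitute for a filter or a virtual filesystem), you can at least watch for writes with ReadDirectoryChangesW. A sketch with a placeholder path; note that it reports modifications rather than file closes, and change notifications over network shares can be unreliable:

#include <windows.h>
#include <cstdio>

int main()
{
    // Open the watched directory; FILE_FLAG_BACKUP_SEMANTICS is required
    // for directory handles.
    HANDLE dir = CreateFileW(L"\\\\server\\docs",             // placeholder path
                             FILE_LIST_DIRECTORY,
                             FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                             nullptr, OPEN_EXISTING,
                             FILE_FLAG_BACKUP_SEMANTICS, nullptr);
    if (dir == INVALID_HANDLE_VALUE)
        return 1;

    alignas(DWORD) BYTE buffer[64 * 1024];
    DWORD bytes = 0;
    while (ReadDirectoryChangesW(dir, buffer, sizeof(buffer), TRUE,
                                 FILE_NOTIFY_CHANGE_LAST_WRITE,
                                 &bytes, nullptr, nullptr))
    {
        if (bytes == 0)    // buffer overflowed; notifications were lost
            continue;
        auto* info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(buffer);
        for (;;)
        {
            // FileNameLength is in bytes, and the name is not null-terminated.
            std::wprintf(L"Modified: %.*ls\n",
                         static_cast<int>(info->FileNameLength / sizeof(WCHAR)),
                         info->FileName);
            if (info->NextEntryOffset == 0)
                break;
            info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(
                reinterpret_cast<BYTE*>(info) + info->NextEntryOffset);
        }
    }
    CloseHandle(dir);
    return 0;
}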

ElasticSearch replication home/server

I am running a local ElasticSearch server in my own home, but would like access to the content from outside. Since I am on a dynamic IP and, besides that, do not feel comfortable opening ports to the outside, I would like to rent a VPS somewhere, set up ElasticSearch, and let this server be a read-only copy of the one I have at home.
As I understand it, this should be possible; however, I have been unsuccessful at creating any usable setup that lets another server be a read-only copy of my home ES server.
Can anyone point me to some information, or write up a guide, that would help me set this up? I am fairly familiar with using ES, but my setup skills are still shaky.
As I understand it, this should be possible
It might be possible with some workarounds, but it's definitely not built for that:
A cluster needs to be in one physical region, mainly because of latency and the stability of the network connection.
There are no read-only versions. You could only allow read access to a node (via a reverse proxy or the security plugin), but that's only a workaround.

Virtual/programmatically generated file on Windows?

I'm looking for a feature similar to CreateNamedPipe on Windows that would allow programmatically generating file contents on demand. However, it would need to support seek operations as well, so a plain named pipe will not work, I think. Or does it?
Some details: the file will be read by another existing program, and changing that program is not possible in this case. The two specific uses are: 1. the actual data is in a compressed binary blob; 2. the actual data is behind a network connection, accessed with a custom protocol. In both cases, the "virtual" file would give access to the data as if it were a local regular file.
I'm sure this would be possible at least by creating a custom file system device driver, or by using an existing network file system and writing a custom server program. But this sounds very complex (is it?) and not worth the effort.
So, is there any practical, efficient solution other than just storing the data in a regular temp file?
You need to write a kernel device driver, or take advantage of one of the existing user-mode device driver frameworks, such as UMDF. You can start reading up on that on Wikipedia.
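On the named-pipe sub-question: pipes are byte streams and don't support seeking, which a quick experiment confirms (a sketch; the pipe name is arbitrary):

#include <windows.h>
#include <cstdio>

int main()
{
    // Create one end of a pipe, then open the client end as if it were a file.
    HANDLE server = CreateNamedPipeW(L"\\\\.\\pipe\\virtualfile",
                                     PIPE_ACCESS_DUPLEX,
                                     PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
                                     1, 4096, 4096, 0, nullptr);
    if (server == INVALID_HANDLE_VALUE)
        return 1;

    HANDLE client = CreateFileW(L"\\\\.\\pipe\\virtualfile",
                                GENERIC_READ, 0, nullptr,
                                OPEN_EXISTING, 0, nullptr);
    if (client == INVALID_HANDLE_VALUE)
        return 1;

    LARGE_INTEGER offset{};
    offset.QuadPart = 100;
    // Expect this to fail (typically ERROR_INVALID_FUNCTION): pipes have
    // no file pointer to move.
    if (!SetFilePointerEx(client, offset, nullptr, FILE_BEGIN))
        std::printf("Seek on a pipe fails: error %lu\n", GetLastError());
    else
        std::printf("Seek unexpectedly succeeded\n");

    CloseHandle(client);
    CloseHandle(server);
    return 0;
}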

Data Transfer Speeds: NFS vs HTTP

I'm currently considering using REST access to Nirvanix online storage to store/download files. However, Nirvanix also offers NFS access to the network storage.
I was wondering if there are any known benchmarks or protocol-specific reasons for choosing REST over NFS?
Use whatever best fits your environment. Any difference is going to be negligible, especially over non-LAN-speed links where things like CPU usage become irrelevant as they're overwhelmed by the simple fact that the link is already saturated.
One possible exception is dealing with lots of little files. If your use case involves rapid access to a lot of little files, I'd suggest testing both and seeing if one is faster by a large enough margin to matter.
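If you do benchmark the small-file case, here's a rough harness for the filesystem side (a sketch; the mount path is a placeholder) to compare against timing the equivalent HTTP GETs with whatever client you use:

#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <filesystem>
#include <fstream>
#include <vector>

int main()
{
    namespace fs = std::filesystem;
    const fs::path root = "Z:/testdata";   // assumed NFS mount point

    auto start = std::chrono::steady_clock::now();
    std::uintmax_t totalBytes = 0;
    std::size_t fileCount = 0;

    // Read every file in full, the way a whole-file download would.
    std::vector<char> buf(64 * 1024);
    for (const auto& entry : fs::recursive_directory_iterator(root))
    {
        if (!entry.is_regular_file())
            continue;
        std::ifstream in(entry.path(), std::ios::binary);
        while (in.read(buf.data(), buf.size()) || in.gcount() > 0)
            totalBytes += static_cast<std::uintmax_t>(in.gcount());
        ++fileCount;
    }

    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    std::printf("%zu files, %ju bytes in %.2f s (%.2f MB/s)\n",
                fileCount, totalBytes, elapsed.count(),
                totalBytes / (1024.0 * 1024.0) / elapsed.count());
    return 0;
}
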
It's a toss-up.
NFS, with the right setup, version, and tuning, is just a tad slower than SMB/CIFS. Older versions, however, can be significantly slower.
What you do gain with NFS is:
primitive file access control (via standard Unix file permissions)
primitive share access control
user mapping
For those platforms that support it, a near-invisibility with regard to operation. It looks just like another subdirectory...
However, if you are not working in a 100% NFS environment, you might find that it's not worth the effort.
By the way, for the record, Windows 7 Beta/RC does support NFS out of the box.
They should be almost the same, but there is one big difference: NFS normally works over UDP (it can be configured to run over TCP), while HTTP runs over TCP. So if you have high packet loss, HTTP should be more stable!
NFS is not a file transfer protocol; it's a network file system protocol. Properly configured and implemented, HTTP should be able to beat it easily.
It will depend on the details of what you're trying to do. If you're just uploading and downloading entire files, then I suspect you'll be able to configure HTTP to do a lot better than NFS.
Recall also that NFS was created in an earlier time. Is NFS 2.0 still the latest version? I recall updating the code of an NFS implementation from 2 to 3. That was in 1996 or so.
