Using DiskVolumeInfo (Cluster Failover API) - winapi

I found the DiskVolumeInfo property -- I'd like to use it to get some disk information in a clustered setup.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb309235(v=vs.85).aspx
The problem is that I have no idea what technology is required to get this data; the page doesn't resemble the standard C/C++/C#/VB style of function or method reference.
Question: How do I get the DiskVolumeInfo data?
Ideally I could write the binary output directly to a file, say data.bin.
Any ideas would be helpful, thanks.

The process for getting object properties is described here.
Looks like you need to call the ClusterResourceControl function with a handle to the physical disk resource and the CLUSCTL_RESOURCE_GET_PRIVATE_PROPERTIES control code. You can then use ResUtilFindBinaryProperty to extract the DiskVolumeInfo property from the property list returned.
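In C or C++ that comes down to something like the sketch below: open the physical disk resource, retrieve its private property list, pull out DiskVolumeInfo, and write the raw bytes to data.bin. This is only a sketch with error handling trimmed; the resource name "Cluster Disk 1" is a placeholder for your own physical disk resource, and you need to link ClusAPI.lib and ResUtils.lib.
#include <windows.h>
#include <clusapi.h>
#include <resapi.h>
#include <vector>
#include <cstdio>

int wmain()
{
    HCLUSTER hCluster = OpenCluster(NULL);                          // local cluster
    HRESOURCE hRes = OpenClusterResource(hCluster, L"Cluster Disk 1");

    // Fetch the private property list; grow the buffer if it is too small.
    std::vector<BYTE> propList(1024);
    DWORD cbReturned = 0;
    DWORD status = ClusterResourceControl(hRes, NULL,
        CLUSCTL_RESOURCE_GET_PRIVATE_PROPERTIES,
        NULL, 0, propList.data(), (DWORD)propList.size(), &cbReturned);
    if (status == ERROR_MORE_DATA) {
        propList.resize(cbReturned);
        status = ClusterResourceControl(hRes, NULL,
            CLUSCTL_RESOURCE_GET_PRIVATE_PROPERTIES,
            NULL, 0, propList.data(), (DWORD)propList.size(), &cbReturned);
    }

    // Pull the DiskVolumeInfo binary value out of the property list.
    LPBYTE pValue = NULL;
    DWORD cbValue = 0;
    if (status == ERROR_SUCCESS &&
        ResUtilFindBinaryProperty(propList.data(), cbReturned, L"DiskVolumeInfo",
                                  &pValue, &cbValue) == ERROR_SUCCESS)
    {
        FILE* f = _wfopen(L"data.bin", L"wb");      // dump the raw DiskVolumeInfo bytes
        if (f) {
            fwrite(pValue, 1, cbValue, f);
            fclose(f);
        }
        LocalFree(pValue);                          // the value buffer is allocated for you
    }

    CloseClusterResource(hRes);
    CloseCluster(hCluster);
    return 0;
}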

For anyone still interested:
As given here, CLUSCTL_RESOURCE_STORAGE_GET_DISK_INFO_EX is a better way to do this.
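Under the same assumptions as the sketch above (same headers and libraries, and an HRESOURCE opened with OpenClusterResource), only the control code changes; the buffer that comes back can be dumped to data.bin as-is.
// Dump the CLUSCTL_RESOURCE_STORAGE_GET_DISK_INFO_EX output to a file.
DWORD DumpDiskInfoEx(HRESOURCE hRes, const wchar_t* path)
{
    std::vector<BYTE> buf(4096);
    DWORD cb = 0;
    DWORD status = ClusterResourceControl(hRes, NULL,
        CLUSCTL_RESOURCE_STORAGE_GET_DISK_INFO_EX,
        NULL, 0, buf.data(), (DWORD)buf.size(), &cb);
    if (status == ERROR_MORE_DATA) {                 // buffer too small: retry with the reported size
        buf.resize(cb);
        status = ClusterResourceControl(hRes, NULL,
            CLUSCTL_RESOURCE_STORAGE_GET_DISK_INFO_EX,
            NULL, 0, buf.data(), (DWORD)buf.size(), &cb);
    }
    if (status == ERROR_SUCCESS) {
        FILE* f = _wfopen(path, L"wb");
        if (f) { fwrite(buf.data(), 1, cb, f); fclose(f); }
    }
    return status;
}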

Is there a Go implementation of fs.ReadDir for Google Cloud Storage?

I am creating a web application in Go.
I have modified my working code so that, based on a flag, it can read and write files either on the local filesystem or in a Google Cloud Storage bucket.
Basically I introduced a small package in between, implementing my-own-pkg.ReadFile, my-own-pkg.WriteFile and so on...
I have replaced all the calls in my code that read or save files on the local filesystem with calls to my own methods.
Finally, these methods contain a simple switch case that runs either the standard code to read/write locally or the code to read/write from/to a GCS bucket.
My current problem
In some parts I need to perform a ReadDir to get the list of DirEntries and then cycle through them. I do not want to change my code except for replacing os.ReadDir with my-own-pkg.ReadDir.
So far I understand that there is no native function for this in the GCS package. So I suppose (but here I need your help because I am just guessing) that I would need an implementation of fs.FS for GCS. Since that is a new feature of Go 1.16, I guess it's too early to find one.
So I am simply trying to create a my-own-pkg.ReadDir(folderpath) function that does the following:
case "local": { }
case "gcp": {
Use the GCS code sample to list the objects in my bucket with Query.Prefix = folderpath and
Query.Delimiter = "/".
Then create a slice of my-own-pkg.DirEntry (because fs.DirEntry is just an interface and so it needs to be implemented... :-( ) and return them. }
In order to do so I would also need to implement the fs.DirEntry interface (which in turn requires implementing the FileInfo interface and maybe something else...).
Question 1) is this the right path to follow to solve my issue or is there a better way?
Question 2) (only) if so, does the GCS method that lists objects with a prefix and a delimiter return just files? I can't see a method that also returns the list of prefixes found.
(If I have prefix/file1.txt and prefix/a/file2.txt, I would like to get back both "file1.txt" as a file and "a" as a prefix...)
I hope I was clear enough... This time I can't include code because it's incomplete, but if it helps I can paste what I have.
NOTE: by the way, Go 1.16 allowed me to solve a similar issue elegantly when dealing with assets either embedded or on the filesystem, thanks to the existing implementation of fs.FS and the related ReadDirFS. It would be great if I could follow the same route 🙂
By the way, I will keep studying and experimenting, and if I am successful I will contribute back as well :-)
I think your abstraction layer is good, but you need to know something about Cloud Storage: directories don't exist.
In fact, all objects sit at the root of the bucket, and the fully qualified name of an object is /path/to/object.file. You can filter on a prefix, which returns all the objects (i.e. files, because directories don't exist) whose names start with that prefix.
It's not a full answer to your question, but I'm sure you can rethink and redesign the rest of your code with this peculiarity in mind.

BluetoothGATTSetCharacteristicValue returns Invalid Handle "E_HANDLE"

I am trying to use BluetoothGATTSetCharacteristicValue to set a value for a given characteristic in a service. I read in the function's documentation that it needs a handle to the service, which I don't know how to obtain.
I tried to use the "ServiceHandle" member of the BTH_LE_GATT_CHARACTERISTIC structure but it doesn't work.
I found the solution and I would like to share it here.
The only way to open a handle on the service is to use the UUID of the service, instead of the UUID of the device, when enumerating the devices with SetupDiGetClassDevs.
Please check the following thread for more details.
https://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/65c9cf4e-e225-4fc3-8c2c-66cd2401d3ed/how-to-establish-a-connection-from-windows-8-pc-to-a-bluetooth-low-energy-device?forum=wdk
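In code, that approach looks roughly like the sketch below (C++). The service interface GUID you enumerate with, and the BTH_LE_GATT_CHARACTERISTIC you pass in (obtained elsewhere, e.g. via BluetoothGATTGetCharacteristics), are assumed to come from your own code; link SetupAPI.lib and BluetoothApis.lib.
#include <windows.h>
#include <setupapi.h>
#include <bthledef.h>
#include <bluetoothleapis.h>
#include <vector>
#include <cstring>

// Open a handle on the GATT *service* by enumerating its interface GUID,
// not the device's GUID - this is the handle the GATT write expects.
HANDLE OpenServiceHandle(const GUID& serviceGuid)
{
    HDEVINFO devInfo = SetupDiGetClassDevs(&serviceGuid, NULL, NULL,
                                           DIGCF_DEVICEINTERFACE | DIGCF_PRESENT);
    if (devInfo == INVALID_HANDLE_VALUE) return INVALID_HANDLE_VALUE;

    HANDLE hService = INVALID_HANDLE_VALUE;
    SP_DEVICE_INTERFACE_DATA ifData = { sizeof(ifData) };
    if (SetupDiEnumDeviceInterfaces(devInfo, NULL, &serviceGuid, 0, &ifData)) {
        DWORD required = 0;
        SetupDiGetDeviceInterfaceDetail(devInfo, &ifData, NULL, 0, &required, NULL);
        std::vector<BYTE> buf(required);
        PSP_DEVICE_INTERFACE_DETAIL_DATA detail =
            reinterpret_cast<PSP_DEVICE_INTERFACE_DETAIL_DATA>(buf.data());
        detail->cbSize = sizeof(SP_DEVICE_INTERFACE_DETAIL_DATA);
        if (SetupDiGetDeviceInterfaceDetail(devInfo, &ifData, detail, required, NULL, NULL)) {
            hService = CreateFile(detail->DevicePath, GENERIC_READ | GENERIC_WRITE,
                                  FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                  OPEN_EXISTING, 0, NULL);
        }
    }
    SetupDiDestroyDeviceInfoList(devInfo);
    return hService;
}

// Write a value to a characteristic through the service handle opened above.
HRESULT WriteCharacteristic(HANDLE hService, BTH_LE_GATT_CHARACTERISTIC& characteristic,
                            const UCHAR* data, ULONG dataLen)
{
    std::vector<BYTE> buf(sizeof(BTH_LE_GATT_CHARACTERISTIC_VALUE) + dataLen);
    PBTH_LE_GATT_CHARACTERISTIC_VALUE value =
        reinterpret_cast<PBTH_LE_GATT_CHARACTERISTIC_VALUE>(buf.data());
    value->DataSize = dataLen;
    memcpy(value->Data, data, dataLen);
    return BluetoothGATTSetCharacteristicValue(hService, &characteristic, value,
                                               0, BLUETOOTH_GATT_FLAG_NONE);
}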

Ruby viewpoint with EWS

I am trying to get started with Viewpoint against EWS in Ruby, and it's not making a lot of sense at the moment. I am wondering where I can find some good example code, or some pointers? I am using 1.0.0-beta.
For example: I know the name of the calendar folder I want to use, so I could search for it, but how do I access methods on that folder once I find it? What are the appropriate parameters, etc.?
Any advice?
If you haven't read it yet I would recommend the README file in the repository. It has a couple of examples that should put you on the right path. Also, the generated API documentation should give you enough to work with.
http://rubydoc.info/github/WinRb/Viewpoint/frames
At a very basic level you can get all of your calendar events with the following code:
calendar = client.get_folder :calendar
events = calendar.items
I hope that gives you a little more to get started with.
Follow-up:
Again, I would point you to the API docs for concrete methods like #items. There are, however, dynamically added methods depending on the type, which you can list with obj.ews_methods. In the case of CalendarItem, one of those methods is #name, so you can call obj.name to get the folder name. The dynamic methods are all backed by a formatted Hash based on the returned SOAP packet. You can see it in its raw form by calling obj.ews_item.
Cheers,
Dan

How can I customize IMFByteStream?

Hello all,
I want to customize the IMFByteStream interface, but I'm facing some problems.
Before explaining my problems, let me describe what I have so far.
First, the customized IMFByteStream wraps an IMFByteStream instance created with MFCreateFile. Therefore, I need to implement the necessary methods (for example, BeginRead, Read, etc.).
Second, I need to decrypt the received data, because the file is encrypted.
As a result, the read sequence is the following:
CustomByteStream::BeginRead() -> CustomByteStream::Read() -> IMFByteStream::Read() -> CustomByteStream::Decrypt() -> pass on the decrypted data.
But I don't know how to pass the data back. Should I use IMFAsyncResult or IMFAsyncCallback? I don't know how.
Please help me. Thank you.
If you implement IMFByteStream, you have to implement IMFAsyncCallback too.
I can't explain everything here, but when I update my project with the MPEG-2 source (MFNode), you will see an implementation of IMFByteStream.
I use it because the original IMFByteStream fails on some video files for no apparent reason. My implementation works well with all the files I've tested. (For now it does not handle big files.)
Edit: I checked my code. I don't actually implement IMFByteStream; I created a class that acts like IMFByteStream, and I implement IMFAsyncCallback for BeginRead/EndRead.
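For what it's worth, here is one way to wire up the read chain from the question: do the read and the decrypt synchronously inside BeginRead, then hand the result back through the caller's IMFAsyncCallback with MFCreateAsyncResult and MFInvokeCallback. This is only a sketch under simplifying assumptions: one outstanding read at a time, Decrypt stands in for your own routine, and the IUnknown plumbing plus the remaining IMFByteStream methods (which just forward to the wrapped stream) are omitted.
#include <mfapi.h>
#include <mfidl.h>
#pragma comment(lib, "mfplat.lib")

class CustomByteStream : public IMFByteStream
{
    IMFByteStream* m_inner = nullptr;   // stream created with MFCreateFile
    ULONG          m_lastRead = 0;      // bytes delivered by the last BeginRead

    void Decrypt(BYTE* pb, ULONG cb);   // your decryption, applied in place

    // ... IUnknown and the remaining IMFByteStream methods forward to m_inner ...

public:
    STDMETHODIMP Read(BYTE* pb, ULONG cb, ULONG* pcbRead)
    {
        HRESULT hr = m_inner->Read(pb, cb, pcbRead);   // read the encrypted bytes
        if (SUCCEEDED(hr))
            Decrypt(pb, *pcbRead);                     // decrypt before handing them out
        return hr;
    }

    STDMETHODIMP BeginRead(BYTE* pb, ULONG cb, IMFAsyncCallback* pCallback, IUnknown* punkState)
    {
        // Do the work synchronously; only the completion is signalled asynchronously.
        HRESULT hrRead = Read(pb, cb, &m_lastRead);

        // Package the status and invoke the caller's callback. The caller's
        // IMFAsyncCallback::Invoke will then call our EndRead with this result.
        IMFAsyncResult* pResult = nullptr;
        HRESULT hr = MFCreateAsyncResult(nullptr, pCallback, punkState, &pResult);
        if (SUCCEEDED(hr))
        {
            pResult->SetStatus(hrRead);
            hr = MFInvokeCallback(pResult);            // queues Invoke on a work queue
            pResult->Release();
        }
        return hr;
    }

    STDMETHODIMP EndRead(IMFAsyncResult* pResult, ULONG* pcbRead)
    {
        *pcbRead = m_lastRead;        // the bytes are already decrypted in the caller's buffer
        return pResult->GetStatus();
    }
};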

distinguish use cases in NSAutosaveElsewhereOperation

I am trying to add autosave support to the Core Data file wrapper example.
Now, if I have a new/untitled document, writeSafelyToURL is called with the NSAutosaveElsewhereOperation type.
The bad thing is, I get this type in both typical use cases:
- new file: which stores a completely new document by creating the file wrapper and the persistent store file
- save diff: where the file wrapper already exists and only an update is required
Has somebody else already handled this topic, or has somebody already migrated this?
The original sample uses the originalStoreURL to distinguish those two use cases; which solution worked best for you?
Thanks
