msync equivalent in Windows

What is the equivalent of the msync Unix system call on Windows? I am looking for an MSDN API in the C/C++ space.
More info on msync can be found at http://opengroup.org/onlinepubs/007908799/xsh/msync.html

FlushViewOfFile
Check out the Python 2.6 mmapmodule.c for an example of FlushViewOfFile and msync in use:
/*
 * Author: Sam Rushing <rushing@nightmare.com>
 * Hacked for Unix by AMK
 * $Id: mmapmodule.c 65859 2008-08-19 17:47:13Z thomas.heller $
 * Modified to support mmap with offset - to map a 'window' of a file
 * Author: Yotam Medini yotamm@mellanox.co.il
 *
 * mmapmodule.cpp -- map a view of a file into memory
 *
 * todo: need permission flags, perhaps a 'chsize' analog
 * not all functions check range yet!!!
 *
 *
 * This version of mmapmodule.c has been changed significantly
 * from the original mmapfile.c on which it was based.
 * The original version of mmapfile is maintained by Sam at
 * ftp://squirl.nightmare.com/pub/python/python-ext.
 */
static PyObject *
mmap_flush_method(mmap_object *self, PyObject *args)
{
    Py_ssize_t offset = 0;
    Py_ssize_t size = self->size;
    CHECK_VALID(NULL);
    if (!PyArg_ParseTuple(args, "|nn:flush", &offset, &size))
        return NULL;
    if ((size_t)(offset + size) > self->size) {
        PyErr_SetString(PyExc_ValueError, "flush values out of range");
        return NULL;
    }
#ifdef MS_WINDOWS
    return PyInt_FromLong((long) FlushViewOfFile(self->data+offset, size));
#elif defined(UNIX)
    /* XXX semantics of return value? */
    /* XXX flags for msync? */
    if (-1 == msync(self->data + offset, size, MS_SYNC)) {
        PyErr_SetFromErrno(mmap_module_error);
        return NULL;
    }
    return PyInt_FromLong(0);
#else
    PyErr_SetString(PyExc_ValueError, "flush not supported on this system");
    return NULL;
#endif
}
UPDATE:
I don't think you are going to find complete parity in the Win32 mapped-file APIs. The FlushViewOfFile API doesn't have a synchronous flavor (probably because of the possible impact on the cache manager). If you need precise control over when data is written to disk, perhaps you can pass the FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH flags to CreateFile when you create the handle to your mapped file?
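To illustrate that suggestion, here is a minimal sketch (hypothetical path and size, error handling omitted) of opening the backing file with FILE_FLAG_WRITE_THROUGH before mapping it and then flushing a view; as noted further down, FILE_FLAG_NO_BUFFERING does not help for memory-mapped I/O, so it is left out here:
#include <windows.h>

// Sketch only: hypothetical path, error handling omitted.
void WriteThroughMappingExample(void)
{
    HANDLE hFile = CreateFileW(L"C:\\temp\\data.bin",
                               GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_ALWAYS,
                               FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
                               NULL);
    HANDLE hMapping = CreateFileMapping(hFile, NULL, PAGE_READWRITE, 0, 4096, NULL);
    LPVOID view = MapViewOfFile(hMapping, FILE_MAP_WRITE, 0, 0, 0);

    // ... modify the view ...

    FlushViewOfFile(view, 0);   // start writing dirty pages back (asynchronous)
    FlushFileBuffers(hFile);    // flush metadata and wait for completion

    UnmapViewOfFile(view);
    CloseHandle(hMapping);
    CloseHandle(hFile);
}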

The Windows equivalent for flushing an entire file mapping is:
void FlushToHardDrive(LPVOID fileMapAddress, HANDLE hFile)
{
    FlushViewOfFile(fileMapAddress, 0); // async flush of all dirty pages in the view
    FlushFileBuffers(hFile);            // flush metadata and wait for completion
}
And for flushing part of a file mapping:
void FlushToHardDrive(LPVOID address, DWORD size, HANDLE hFile)
{
    FlushViewOfFile(address, size); // async flush of the region
    FlushFileBuffers(hFile);        // flush metadata and wait for completion
}
This is described on MSDN.
The FILE_FLAG_NO_BUFFERING flag actually does nothing for memory-mapped files (also described on MSDN). Furthermore, even if the file handle is created with these flags, the file's metadata can still be cached and not flushed, so FlushFileBuffers is always required, for both regular I/O and memory-mapped I/O, if you want to be completely sure that all data (including the file access time) has been saved. This behavior is also documented on MSDN.
P.S. A real-world example: SQLite uses memory-mapped files almost exclusively for reading, so when you use memory-mapped files for writes/updates you need to understand all the side effects of that scenario.

I suspect FlushViewOfFile actually is the right thing. When I read the man page for msync, I would not assume that it is actually flushing the disk cache (the cache in the disk unit, as opposed to the system cache in main memory).
FlushViewOfFile won't return until the disk stack has completed the writes; like the msync documentation, it says nothing about what happens in the disk cache. We should probably take a look at making that more clear in the documentation.

Related

How to use the writeback connector by the Linux DRM api /libdrm?

I recently started learning about the Direct Rendering Manager and tried to use the writeback connector with libdrm.
There is some documentation and code showing how the kernel implements it, but not much on how userspace uses the API, such as drmModeAtomicAddProperty and drmModeAtomicCommit for the writeback connector in libdrm.
I have referred to libdrm/tests/modetest, the Linux API Reference, linux/v6.0-rc5/source/drivers/gpu/drm/drm_writeback.c, and some patch information.
I used modetest to get some driver information and tried my code using libdrm:
/* I previously got the value of the writeback connector (wbc_conn_id),
 * created an output framebuffer (fb_id), and found the active CRTC (crtc_id).
 * Next, get the writeback connector properties. */
props = drmModeObjectGetProperties(fd, wbc_conn_id, DRM_MODE_OBJECT_CONNECTOR);
printf("the number of properties in connector %d : %d \n", wbc_conn_id, props->count_props);
writeback_fb_id_property = get_property_id(fd, props, "WRITEBACK_FB_ID");
writeback_crtc_id_property = get_property_id(fd, props, "CRTC_ID");
printf("writeback_fb_property: %d\n", writeback_fb_id_property);
printf("writeback_crtc_id_property: %d\n", writeback_crtc_id_property);
drmModeFreeObjectProperties(props);

/* atomic writeback connector update */
req = drmModeAtomicAlloc();
ret = drmModeAtomicAddProperty(req, wbc_conn_id, writeback_crtc_id_property, crtc_id);
printf("%d\n", ret);
ret = drmModeAtomicAddProperty(req, wbc_conn_id, writeback_fb_id_property, buf.fb_id);
printf("%d\n", ret);
ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
if (ret) {
    fprintf(stderr, "Atomic Commit failed [1]\n");
    return 1;
}
drmModeAtomicFree(req);
printf("drmModeAtomicCommit Set Writeback\n");
getchar();
It turns out that drmModeAtomicCommit fails. Was a property set wrongly, or is one missing? The return values of the two drmModeAtomicAddProperty calls are 1 and 2, and the atomic commit returns EINVAL (-22).
I've looked around but found no solution or similar question about the property set of the writeback connector.
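For reference, here is a minimal sketch of the client-capability setup that an atomic writeback commit assumes has already happened on the device file descriptor; whether a missing capability is the cause of the EINVAL above is not something I can confirm:
#include <xf86drm.h>

/* Sketch: enable the client caps on an already-open DRM fd.
 * Writeback connectors are only exposed to clients that set the
 * writeback capability, which itself requires the atomic capability. */
static void enable_writeback_caps(int fd)
{
    drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1);
    drmSetClientCap(fd, DRM_CLIENT_CAP_WRITEBACK_CONNECTORS, 1);
}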

Enabling Closed-Display Mode w/o Meeting Apple's Requirements

EDIT:
I have heavily edited this question after making some significant new discoveries and the question not having any answers yet.
Historically/AFAIK, keeping your Mac awake while in closed-display mode without meeting Apple's requirements has only been possible with a kernel extension (kext) or a command run as root. Recently, however, I have discovered that there must be another way. I could really use some help figuring out how to get this working for use in a (100% free, no IAP) sandboxed Mac App Store (MAS) compatible app.
I have confirmed that some other MAS apps are able to do this, and it looks like they might be writing YES to a key named clamshellSleepDisabled. Or perhaps there's some other trickery involved that causes the key value to be set to YES? I found the function in IOPMrootDomain.cpp:
void IOPMrootDomain::setDisableClamShellSleep( bool val )
{
    if (gIOPMWorkLoop->inGate() == false) {
        gIOPMWorkLoop->runAction(
            OSMemberFunctionCast(IOWorkLoop::Action, this, &IOPMrootDomain::setDisableClamShellSleep),
            (OSObject *)this,
            (void *)val);
        return;
    }
    else {
        DLOG("setDisableClamShellSleep(%x)\n", (uint32_t) val);
        if ( clamshellSleepDisabled != val )
        {
            clamshellSleepDisabled = val;
            // If clamshellSleepDisabled is reset to 0, reevaluate if
            // system need to go to sleep due to clamshell state
            if ( !clamshellSleepDisabled && clamshellClosed)
                handlePowerNotification(kLocalEvalClamshellCommand);
        }
    }
}
I'd like to give this a try and see if that's all it takes, but I don't really have any idea about how to go about calling this function. It's certainly not a part of the IOPMrootDomain documentation, and I can't seem to find any helpful example code for functions that are in the IOPMrootDomain documentation, such as setAggressiveness or setPMAssertionLevel. Here's some evidence of what's going on behind the scenes according to Console:
I've had a tiny bit of experience working with IOPMrootDomain via adapting some of ControlPlane's source for another project, but I'm at a loss for how to get started on this. Any help would be greatly appreciated. Thank you!
EDIT:
With @pmdj's contribution/answer, this has been solved!
Full example project:
https://github.com/x74353/CDMManager
This ended up being surprisingly simple/straightforward:
1. Import header:
#import <IOKit/pwr_mgt/IOPMLib.h>
2. Add this function in your implementation file:
IOReturn RootDomain_SetDisableClamShellSleep(io_connect_t root_domain_connection, bool disable)
{
    uint32_t num_outputs = 0;
    uint32_t input_count = 1;
    uint64_t input[input_count];
    input[0] = (uint64_t) { disable ? 1 : 0 };
    return IOConnectCallScalarMethod(root_domain_connection, kPMSetClamshellSleepState,
                                     input, input_count, NULL, &num_outputs);
}
3. Use the following to call the above function from somewhere else in your implementation:
io_connect_t connection = IO_OBJECT_NULL;
io_service_t pmRootDomain = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("IOPMrootDomain"));
IOServiceOpen (pmRootDomain, current_task(), 0, &connection);
// 'enable' is a bool you should assign a YES or NO value to prior to making this call
RootDomain_SetDisableClamShellSleep(connection, enable);
IOServiceClose(connection);
I have no personal experience with the PM root domain, but I do have extensive experience with IOKit, so here goes:
You want IOPMrootDomain::setDisableClamShellSleep() to be called.
A code search for sites calling setDisableClamShellSleep() quickly reveals a location in RootDomainUserClient::externalMethod(), in the file iokit/Kernel/RootDomainUserClient.cpp. This is certainly promising, as externalMethod() is what gets called in response to user space programs calling the IOConnectCall*() family of functions.
Let's dig in:
IOReturn RootDomainUserClient::externalMethod(
    uint32_t selector,
    IOExternalMethodArguments * arguments,
    IOExternalMethodDispatch * dispatch __unused,
    OSObject * target __unused,
    void * reference __unused )
{
    IOReturn ret = kIOReturnBadArgument;
    switch (selector)
    {
    …
    …
    …
    case kPMSetClamshellSleepState:
        fOwner->setDisableClamShellSleep(arguments->scalarInput[0] ? true : false);
        ret = kIOReturnSuccess;
        break;
    …
So, to invoke setDisableClamShellSleep() you'll need to:
Open a user client connection to IOPMrootDomain. This looks straightforward, because:
Upon inspection, IOPMrootDomain has an IOUserClientClass property of RootDomainUserClient, so IOServiceOpen() from user space will by default create a RootDomainUserClient instance.
IOPMrootDomain does not override the newUserClient member function, so there are no access controls there.
RootDomainUserClient::initWithTask() does not appear to place any restrictions (e.g. root user, code signing) on the connecting user space process.
So it should simply be a case of running this code in your program:
io_connect_t connection = IO_OBJECT_NULL;
IOReturn ret = IOServiceOpen(
    root_domain_service,
    current_task(),
    0, // user client type, ignored
    &connection);
Call the appropriate external method.
From the code excerpt earlier on, we know that the selector must be kPMSetClamshellSleepState.
arguments->scalarInput[0] being zero will call setDisableClamShellSleep(false), while a nonzero value will call setDisableClamShellSleep(true).
This amounts to:
IOReturn RootDomain_SetDisableClamShellSleep(io_connect_t root_domain_connection, bool disable)
{
    uint32_t num_outputs = 0;
    uint64_t inputs[] = { disable ? 1 : 0 };
    return IOConnectCallScalarMethod(
        root_domain_connection, kPMSetClamshellSleepState,
        inputs, 1, // 1 = length of array 'inputs'
        NULL, &num_outputs);
}
When you're done with your io_connect_t handle, don't forget to IOServiceClose() it.
This should let you toggle clamshell sleep on or off. Note that there does not appear to be any provision for automatically resetting the value to its original state, so if your program crashes or exits without cleaning up after itself, whatever state was last set will remain. This might not be great from a user experience perspective, so perhaps try to defend against it somehow, for example in a crash handler.
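Building on that caution, here is a minimal sketch of restoring the state on normal process exit; the global handle and the atexit() registration are illustrative additions rather than part of the example above, and a crash handler would still be needed for abnormal termination:
#include <stdlib.h>
#include <stdbool.h>
#include <IOKit/IOKitLib.h>

// Defined as in the answer above.
IOReturn RootDomain_SetDisableClamShellSleep(io_connect_t connection, bool disable);

// Illustrative global holding the root-domain connection for the process lifetime.
static io_connect_t g_rootDomainConnection = IO_OBJECT_NULL;

static void RestoreClamshellSleep(void)
{
    if (g_rootDomainConnection != IO_OBJECT_NULL) {
        // Re-enable clamshell sleep before the process goes away.
        RootDomain_SetDisableClamShellSleep(g_rootDomainConnection, false);
        IOServiceClose(g_rootDomainConnection);
        g_rootDomainConnection = IO_OBJECT_NULL;
    }
}

// Register once, right after opening the connection:
// atexit(RestoreClamshellSleep);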

Workaround for a certain IOServiceOpen() call requiring root privileges

Background
It is possible to perform a software-controlled disconnection of the power adapter of a Mac laptop by creating a DisableInflow power management assertion.
Code from this answer to an SO question can be used to create said assertion. The following is a working example that creates this assertion until the process is killed:
#include <IOKit/pwr_mgt/IOPMLib.h>
#include <unistd.h>

int main()
{
    IOPMAssertionID neverSleep = 0;
    IOPMAssertionCreateWithName(kIOPMAssertionTypeDisableInflow,
                                kIOPMAssertionLevelOn,
                                CFSTR("disable inflow"),
                                &neverSleep);
    while (1)
    {
        sleep(1);
    }
}
This runs successfully and the power adapter is disconnected by software while the process is running.
What's interesting, though, is that I was able to run this code as a regular user, without root privileges, which wasn't supposed to happen. For instance, note the comment in this file from Apple's open source repositories:
// Disables AC Power Inflow (requires root to initiate)
#define kIOPMAssertionTypeDisableInflow CFSTR("DisableInflow")
#define kIOPMInflowDisableAssertion kIOPMAssertionTypeDisableInflow
I found some code which apparently performs the actual communication with the charger; it can be found here. The following functions, from this file, appear to be of particular interest:
IOReturn
AppleSmartBatteryManagerUserClient::externalMethod(
    uint32_t selector,
    IOExternalMethodArguments * arguments,
    IOExternalMethodDispatch * dispatch __unused,
    OSObject * target __unused,
    void * reference __unused )
{
    if (selector >= kNumBattMethods) {
        // Invalid selector
        return kIOReturnBadArgument;
    }
    switch (selector)
    {
    case kSBInflowDisable:
        // 1 scalar in, 1 scalar out
        return this->secureInflowDisable((int)arguments->scalarInput[0],
                                         (int *)&arguments->scalarOutput[0]);
        break;
    // ...
    }
    // ...
}

IOReturn AppleSmartBatteryManagerUserClient::secureInflowDisable(
    int level,
    int *return_code)
{
    int admin_priv = 0;
    IOReturn ret = kIOReturnNotPrivileged;

    if( !(level == 0 || level == 1))
    {
        *return_code = kIOReturnBadArgument;
        return kIOReturnSuccess;
    }

    ret = clientHasPrivilege(fOwningTask, kIOClientPrivilegeAdministrator);
    admin_priv = (kIOReturnSuccess == ret);

    if(admin_priv && fOwner) {
        *return_code = fOwner->disableInflow( level );
        return kIOReturnSuccess;
    } else {
        *return_code = kIOReturnNotPrivileged;
        return kIOReturnSuccess;
    }
}
Note how, in secureInflowDisable(), root privileges are checked for prior to running the code. Note also this initialization code in the same file, again requiring root privileges, as explicitly pointed out in the comments:
bool AppleSmartBatteryManagerUserClient::initWithTask(task_t owningTask,
    void *security_id, UInt32 type, OSDictionary * properties)
{
    uint32_t _pid;

    /* 1. Only root processes may open a SmartBatteryManagerUserClient.
     * 2. Attempts to create exclusive UserClients will fail if an
     *    exclusive user client is attached.
     * 3. Non-exclusive clients will not be able to perform transactions
     *    while an exclusive client is attached.
     * 3a. Only battery firmware updaters should bother being exclusive.
     */
    if ( kIOReturnSuccess !=
         clientHasPrivilege(owningTask, kIOClientPrivilegeAdministrator))
    {
        return false;
    }
    // ...
}
Starting from the code in that same SO question (the question itself, not the answer), specifically its sendSmartBatteryCommand() function, I wrote some code that calls the function, passing kSBInflowDisable as the selector (the which variable in that code).
Unlike the code using assertions, this one only works as root. When run as a regular user, IOServiceOpen() returns, weirdly enough, kIOReturnBadArgument (not kIOReturnNotPrivileged, as I would have expected). Perhaps this has to do with the initWithTask() method above.
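To make the failure mode concrete, the attempt looks roughly like this (illustrative names rather than my exact code; the service class name is assumed from the kext sources quoted above):
#include <IOKit/IOKitLib.h>

// Illustrative sketch of the failing path (not the exact code).
static kern_return_t try_open_battery_manager(io_connect_t *connection)
{
    io_service_t manager = IOServiceGetMatchingService(kIOMasterPortDefault,
                               IOServiceMatching("AppleSmartBatteryManager"));

    // As a regular user this already fails with kIOReturnBadArgument, presumably
    // because AppleSmartBatteryManagerUserClient::initWithTask() rejects the task,
    // so the later IOConnectCallMethod() with kSBInflowDisable is never reached.
    return IOServiceOpen(manager, mach_task_self(), 0, connection);
}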
The question
I need to perform a call with a different selector to this same Smart Battery Manager kext. However, I can't even get to the IOConnectCallMethod() call, since IOServiceOpen() fails, presumably because the initWithTask() method prevents any non-root user from opening the service.
The question, therefore, is this: how is IOPMAssertionCreateWithName() capable of creating a DisableInflow assertion without root privileges?
The only possibility I can think of is if there's a root-owned process to which requests are forwarded, and which performs the actual work of calling IOServiceOpen() and later IOConnectCallMethod() as root.
However, I'm hoping there's a different way of calling the Smart Battery Manager kext which doesn't require root (one that doesn't involve the IOServiceOpen() call.) Using IOPMAssertionCreateWithName() itself is not possible in my application, since I need to call a different selector within that kext, not the one that disables inflow.
It's also possible this is in fact a security vulnerability, which Apple will now fix in a future release as soon as it is alerted to this question. That would be too bad, but understandable.
Although running as root is a possibility in macOS, it's obviously desirable to avoid privilege elevation unless absolutely necessary. Also, in the future I'd like to run the same code under iOS, where it's impossible to run anything as root, in my understanding (note this is an app I'm developing for my own personal use; I understand linking to IOKit wipes out any chance of getting the app published in the App Store).

How can I delete a file and send it to the recycle bin in Vista/7 using IFileOperation?

According to the documentation for IFileOperation::SetOperationFlags, the FOFX_RECYCLEONDELETE flag was introduced in Windows 8.
I would like to delete files and send them to the recycle bin. How is it possible to do that using IFileOperation in Vista and Windows 7?
I know that SHFileOperation supports that functionality, but I don't want to use SHFileOperation as Microsoft are telling us to use IFileOperation in its place. Is this possible using IFileOperation, and if so, how is it to be done?
The documentation for SetOperationFlags says:
This member can be a combination of the following flags. FOF flags are defined in Shellapi.h and FOFX flags are defined in Shobjidl.h.
So you can use the exact same flag, FOF_ALLOWUNDO, that you use with SHFileOperation to direct a delete action to move to the recycle bin.
The FOFX_RECYCLEONDELETE flag was introduced in Windows 8 - will it work in Vista/7?
Since FOFX_RECYCLEONDELETE was introduced in Windows 8, then it did not exist in Vista/7, so no, it will not work in those versions.
There's always SHFileOperation but I'd rather use a more up-to-date Win32 API method. Anything else to know? Any alternate ways of recycling files/folders?
SHFileOperation() is the only documented way to recycle files/folders:
When used to delete a file, SHFileOperation permanently deletes the file unless you set the FOF_ALLOWUNDO flag in the fFlags member of the SHFILEOPSTRUCT structure pointed to by lpFileOp. Setting that flag sends the file to the Recycle Bin. If you want to simply delete a file and guarantee that it is not placed in the Recycle Bin, use DeleteFile.
That same flag is available in IFileOperation, but its documented behavior is different:
Preserve undo information, if possible.
Prior to Windows Vista, operations could be undone only from the same process that performed the original operation.
In Windows Vista and later systems, the scope of the undo is a user session. Any process running in the user session can undo another operation. The undo state is held in the Explorer.exe process, and as long as that process is running, it can coordinate the undo functions.
That is why FOFX_RECYCLEONDELETE had to be introduced - to re-add the old Recycle Bin behavior that had been lost when IFileOperation was first introduced.
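For comparison, a minimal sketch of the SHFileOperation route with FOF_ALLOWUNDO (the path is hypothetical):
#include <windows.h>
#include <shellapi.h>

// Sketch: recycle a single file via SHFileOperation. The path is hypothetical;
// pFrom must be double-null-terminated, hence the explicit extra \0.
int RecycleOneFile(void)
{
    wchar_t path[] = L"C:\\temp\\old-report.txt\0";

    SHFILEOPSTRUCTW op = {0};
    op.wFunc  = FO_DELETE;
    op.pFrom  = path;
    op.fFlags = FOF_ALLOWUNDO | FOF_NOCONFIRMATION;  // recycle instead of deleting permanently
    return SHFileOperationW(&op);                    // 0 on success
}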
I have verified David Heffernan's assessment of the FOF_ALLOWUNDO flag's use with IFileOperation to send items to the recycle bin. Here's the code. Apparently SHCreateItemFromParsingName is MS's way of saying "create an item from a string". This code is tailored to C++ with Qt. You'll have to initialize COM first, of course.
void Worker::deleteItem(QString item)
{
    HRESULT hr;
    IFileOperation *pfo;
    wchar_t *itemWChar = new wchar_t[item.length() + 1];
    item.toWCharArray(itemWChar);
    itemWChar[item.length()] = 0;
    PCWSTR itemPCWSTR = itemWChar;

    hr = CoCreateInstance(CLSID_FileOperation,
                          NULL,
                          CLSCTX_ALL,
                          //IID_IFileOperation,
                          IID_PPV_ARGS(&pfo));
    if (!SUCCEEDED(hr))
    {
        //error handling here
        return;
    }

    hr = pfo->SetOperationFlags(FOF_ALLOWUNDO | FOF_NOCONFIRMATION);
    if (!SUCCEEDED(hr))
    {
        //error handling here
        return;
    }

    IShellItem *deleteItem = NULL;
    hr = SHCreateItemFromParsingName(itemPCWSTR,
                                     NULL,
                                     IID_PPV_ARGS(&deleteItem));
    if (!SUCCEEDED(hr))
    {
        //error handling here
        return;
    }

    hr = pfo->DeleteItem(deleteItem, NULL);
    if (deleteItem != NULL)
    {
        deleteItem->Release();
    }
    if (!SUCCEEDED(hr))
    {
        //error handling here
        return;
    }

    hr = pfo->PerformOperations();
    if (!SUCCEEDED(hr))
    {
        //error handling here
        return;
    }

    pfo->Release();
    delete[] itemWChar;
}

Running a kernel with CUDAfy .NET using OpenCL asynchronously

I am trying to make asynchronous kernel calls to my GPGPU using CUDAfy .NET.
When I pass values to the kernel and copy them back to the host, I do not always get the value I expect.
I have a structure Foo with a byte Bar:
[Cudafy]
public struct Foo {
    public byte Bar;
}
And I have a kernel I want to call:
[Cudafy]
public static void simulation(GThread thread, Foo[] f)
{
    f[0].Bar = 3;
    thread.SyncThreads();
}
I have a single thread with streamID = 1 (I tried using multiple threads, and noticed the issue. Reducing to a single thread didn't seem to fix the issue though).
//allocate
streamID = 1;
count = 1;
gpu.CreateStream(streamID);
Foo[] sF = new Foo[count];
IntPtr hF = gpu.HostAllocate<Foo>(count);
Foo[] dF = gpu.Allocate<Foo>(sF);

while (true)
{
    //set value
    sF[0].Bar = 1;
    byte begin = sF[0].Bar;

    //host -> pinned
    GPGPU.CopyOnHost<Foo>(sF, 0, hF, 0, count);
    sF[0].Bar = 2;

    lock (gpu)
    {
        //pinned -> device
        gpu.CopyToDeviceAsync<Foo>(hF, 0, dF, 0, count, streamID);
        //run
        gpu.Launch().simulation(dF);
        //device -> pinned
        gpu.CopyFromDeviceAsync<Foo>(dF, 0, hF, 0, count, streamID);
    }

    //WAIT
    gpu.SynchronizeStream(streamID);

    //pinned -> host
    GPGPU.CopyOnHost<Foo>(hF, 0, sF, 0, count);
    byte end = sF[0].Bar;
}

//de-allocate
gpu.Free(dF);
gpu.HostFree(hF);
gpu.DestroyStream(streamID);
First I create a stream on the GPU.
I am creating a regular Foo structure array of size 1 (sF) and setting its Bar value to 1. Then I create pinned memory on the host (hF) for Foo as well. I also create memory on the device for Foo (dF).
I initialize the structure's Bar value to 1, then I copy it to the pinned memory (as a check, I set the structure's value to 2 after copying to pinned; you'll see why later). Then I use a lock to ensure I have full access to the GPU, and I queue a copy to dF, a kernel launch, and a copy from dF. At this point I don't know when this will all actually run on the GPU... so I call SynchronizeStream to wait on the host until the device is done.
When it's done, I can copy the pinned memory (hF) to the shared memory (sF). When I get the value, it's usually a 3 (which was set on the device) or a 1 (which means either the value wasn't set in the kernel, or the new value wasn't copied to the pinned memory). I do know that the pinned memory is copied to the structure because the structure never has the value of 2.
Over many runs, a small percentage of runs produces something other than begin=1 and end=3. It is always begin=1, end=1, and it happens about 5-10% of the time.
I have no idea why this happens. I know this generally points to a race condition, but given the synchronization calls I would expect the async calls to behave in a predictable fashion.
Why would I be encountering this kind of issue with this code?
Thank you so much!
-Phil
I just figured out the issue that was occurring. While the launch was being done asynchronously... I didn't include the stream for the launch.
Changing my launch to be:
gpu.Launch(gridsize,blocksize,streamID).simulation(dF);
resolved the problem. It seems that the launches were occurring on stream 0 while streams 1 and 2 were being synchronized, so sometimes the data gets set and sometimes it doesn't: a race condition.
