I am currently trying to create a shared memory file for my process. The filename consists of several parts that identify the process the SHM belongs to and what its content is. An example would be:
shm_pl_dev_system_25077
I create all the files in a directory I created in /tmp, where I have full read and write permissions.
So the complete path would be:
/tmp/pl_dev/shm_pl_dev_system_25077
I create several files there, some FIFO pipes and other stuff, and also the SHM. The only problem I get is that shm_open always fails with errno 63 (ENAMETOOLONG).
Can you tell me what the issue here is?
Here is the code:
handle_ = ::shm_open(shm_name.get(), O_RDWR, 0755);
if (handle_ == -1 && errno == ENOENT)
{
    // SHM does not yet exist, so create it!
    handle_ = ::shm_open(shm_name.get(), O_CREAT | O_RDWR, 0755);
    if (handle_ != -1) {
        created_ = true;
    }
    else
    {
        if (!silent_)
        {
            log.error("Couldn't create the SHM (%d).", errno);
        }
        return false;
    }
}
Okay, it seems OS X is very limited when it comes to SHM names... The maximum length for a name is currently 31 chars (see PSHMNAMELENGTH in /usr/include/sys/posix_shm.h).
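For anyone hitting the same error, here is a minimal sketch of how one might guard against that limit before calling shm_open. The MY_PSHM_NAME_MAX constant mirrors the 31-character PSHMNAMELENGTH value from the header above, and open_or_create_shm is a hypothetical helper, not the code from my class:

#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>

/* Mirrors PSHMNAMELENGTH from <sys/posix_shm.h> on OS X (31 chars). */
#define MY_PSHM_NAME_MAX 31

/* Hypothetical helper: refuse over-long names up front, otherwise open or
 * create the shared memory object just like the snippet above. */
int open_or_create_shm(const char *name)
{
    if (strlen(name) > MY_PSHM_NAME_MAX) {
        errno = ENAMETOOLONG;   /* same error shm_open itself would give */
        return -1;
    }

    int fd = shm_open(name, O_RDWR, 0755);
    if (fd == -1 && errno == ENOENT)
        fd = shm_open(name, O_CREAT | O_RDWR, 0755);
    return fd;
}

With a short name such as "shm_pl_dev_system_25077" this goes through, while the full /tmp/pl_dev/... path from above is rejected before shm_open even sees it.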
I'm writing a small image viewer application which uses the Shell API to access files.
I noticed that every time I access an image on an external device, like my connected phone, the file is copied into a local cache named INetCache.
Below is an example of an image path on my external device:
This PC\Apple iPhone\Internal Storage\DCIM\202005__\IMG_2768.HEIC
and the same image path from the PIDL received by the Shell when I try to open it:
C:\Users\Admin\AppData\Local\Microsoft\Windows\INetCache\IE\MD7CNXNX\IMG_2768[1].HEIC
As you can see, the file was copied into INetCache, and even duplicated. Would it be possible to open the file from my external device without copying it into INetCache? And if yes, how can I achieve that (if possible using the Shell API)?
UPDATE on 11.08.2022
Below is the code I use to get the PIDL from an IDataObject I receive after a drag&drop operation, which contains the file to open:
std::wstring GetPIDLFromDataObject(IDataObject* pDataObject)
{
    std::wstring line = L"PIDL result:\r\n";
    if (!pDataObject)
        return line;
    CComPtr<IShellItemArray> pShellItemArray;
    if (FAILED(::SHCreateShellItemArrayFromDataObject(pDataObject, IID_PPV_ARGS(&pShellItemArray))))
        return line;
    CComPtr<IEnumShellItems> pShellItemEnum;
    pShellItemArray->EnumItems(&pShellItemEnum);
    if (!pShellItemEnum)
        return line;
    for (CComPtr<IShellItem> pShellItem; pShellItemEnum->Next(1, &pShellItem, nullptr) == S_OK; pShellItem.Release())
    {
        CComHeapPtr<wchar_t> spszName;
        if (SUCCEEDED(pShellItem->GetDisplayName(SIGDN_DESKTOPABSOLUTEPARSING, &spszName)))
        {
            line += spszName;
            line += L"\r\n";
        }
        CComHeapPtr<ITEMIDLIST_ABSOLUTE> pIDL;
        if (SUCCEEDED(CComQIPtr<IPersistIDList>(pShellItem)->GetIDList(&pIDL)))
        {
            UINT cb = ::ILGetSize(pIDL);
            BYTE* pb = reinterpret_cast<BYTE*>(static_cast<PIDLIST_ABSOLUTE>(pIDL));
            for (UINT i = 0; i < cb; i++)
            {
                WCHAR hexArray[4];
                ::StringCchPrintf(hexArray, ARRAYSIZE(hexArray), L" %02X", pb[i]);
                line += hexArray;
            }
            line += L"\r\n";
        }
    }
    return line;
}
Actually, the problem is: when the system gets upgraded, it should write a "DISABLE_BACKUP" file in the root directory. When it comes up, I have to check whether the file is present in the root directory or not.
if ((dir = opendir ("/")) != NULL)
{
    while ((ent = readdir(dir)) != NULL)
    {
        printf ("%s\n", ent->d_name);
        // Here I have to compare the filename with the string "DISABLE_BACKUP" and raise a log entry.
    }
    closedir(dir);
}
The C function for comparing strings is strcmp():
if (strcmp(ent->d_name, "DISABLE_BACKUP") == 0) {
    // Found it!
}
Perhaps a better way to see if the file "DISABLE_BACKUP" exists is access():
#include <unistd.h>
...
if (access(fname, F_OK) != -1) {
    // file exists
} else {
    // file doesn't exist
}
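Putting the two together for this particular case, a minimal sketch might look like this (the full path "/DISABLE_BACKUP" and the log wording are assumptions based on the question):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* The upgrade writes the file into the root directory, so the full
     * path is assumed to be "/DISABLE_BACKUP". */
    if (access("/DISABLE_BACKUP", F_OK) != -1) {
        printf("DISABLE_BACKUP present - backups are disabled\n"); /* raise log entry here */
    } else {
        printf("DISABLE_BACKUP not found\n");
    }
    return 0;
}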
I wrote a function to watch a file (given an fd) growing to a certain size including a timeout. I'm using kqueue()/kevent() to wait for the file to be "extended" but after I get the notification that the file grew I have to check the file size (and compare it against the desired size). That seems to be easy but I cannot figure out a way to do that reliably in POSIX.
NB: The timeout will hit if the file doesn't grow at all for the time specified. So this is not an absolute timeout, just a timeout on any growth happening to the file. I'm on OS X, but this question is meant for "every POSIX that has kevent()/kqueue()", which should be OS X and the BSDs I think.
Here's my current version of my function:
/**
 * Blocks until `fd` reaches `size`. Times out if `fd` isn't extended for `timeout`
 * amount of time. Returns `-1` and sets `errno` to `EFBIG` should the file be bigger
 * than wanted.
 */
int fwait_file_size(int fd,
                    off_t size,
                    const struct timespec *restrict timeout)
{
    int ret = -1;
    int kq = kqueue();
    struct kevent changelist[1];

    if (kq < 0) {
        /* errno set by kqueue */
        ret = -1;
        goto out;
    }

    memset(changelist, 0, sizeof(changelist));
    EV_SET(&changelist[0], fd, EVFILT_VNODE, EV_ADD | EV_ENABLE | EV_CLEAR,
           NOTE_DELETE | NOTE_RENAME | NOTE_EXTEND, 0, 0);
    if (kevent(kq, changelist, 1, NULL, 0, NULL) < 0) {
        /* errno set by kevent */
        ret = -1;
        goto out;
    }

    while (true) {
        {
            /* Step 1: Check the size */
            int suc_sz = evaluate_fd_size(fd, size); /* IMPLEMENTATION OF THIS IS THE QUESTION */
            if (suc_sz > 0) {
                /* wanted size */
                ret = 0;
                goto out;
            } else if (suc_sz < 0) {
                /* errno and return code already set */
                ret = -1;
                goto out;
            }
        }
        {
            /* Step 2: Wait for growth */
            int suc_kev = kevent(kq, NULL, 0, changelist, 1, timeout);
            if (0 == suc_kev) {
                /* That's a timeout */
                errno = ETIMEDOUT;
                ret = -1;
                goto out;
            } else if (suc_kev > 0) {
                if (changelist[0].filter == EVFILT_VNODE) {
                    if (changelist[0].fflags & NOTE_RENAME || changelist[0].fflags & NOTE_DELETE) {
                        /* file was deleted, renamed, ... */
                        errno = ENOENT;
                        ret = -1;
                        goto out;
                    }
                }
            } else {
                /* errno set by kevent */
                ret = -1;
                goto out;
            }
        }
    }

out: {
        int errno_save = errno;
        if (kq >= 0) {
            close(kq);
        }
        errno = errno_save;
        return ret;
    }
}
So the basic algorithm works the following way:
1. Set up the kevent
2. Check the size
3. Wait for file growth
Steps 2 and 3 are repeated until the file has reached the wanted size.
The code uses a function int evaluate_fd_size(int fd, off_t wanted_size) which will return < 0 for "some error happened or file larger than wanted", == 0 for "file not big enough yet", or > 0 for file has reached the wanted size.
Obviously this only works if evaluate_fd_size is reliable in determining the file size. My first go was to implement it with off_t eof_pos = lseek(fd, 0, SEEK_END) and compare eof_pos against wanted_size. Unfortunately, lseek seems to cache the results: even when kevent returned with NOTE_EXTEND, meaning the file grew, the result may be the same! Then I thought about switching to fstat, but found articles saying that fstat caches as well.
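That first go looked roughly like this (a simplified sketch of the idea, following the return-value contract described above):

#include <errno.h>
#include <unistd.h>

/* Naive version based on lseek(SEEK_END): returns 1 once the file has
 * exactly `wanted_size` bytes, 0 while it is still smaller, and -1 with
 * errno set on error or if the file is already bigger than wanted. */
static int evaluate_fd_size(int fd, off_t wanted_size)
{
    off_t eof_pos = lseek(fd, 0, SEEK_END);
    if (eof_pos < 0)
        return -1;              /* errno set by lseek */
    if (eof_pos > wanted_size) {
        errno = EFBIG;          /* file is bigger than wanted */
        return -1;
    }
    return (eof_pos == wanted_size) ? 1 : 0;
}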
The last thing I tried was using fsync(fd); before off_t eof_pos = lseek(fd, 0, SEEK_END); and suddenly things started working. But:
Nothing states that fsync() really solves my problem
I don't want to fsync() because of performance
EDIT: It's really hard to reproduce but I saw one case in which fsync() didn't help. It seems to take (very little) time until the file size is larger after a NOTE_EXTEND event hit user space. fsync() probably just works as a good enough sleep() and therefore it works most of the time :-.
So, in other words: how do I reliably check the file size in POSIX without opening/closing the file (which I cannot do because I don't know the file name)? Additionally, I can't find any guarantee that reopening the file would help.
By the way: int new_fd = dup(fd); off_t eof_pos = lseek(new_fd, 0, SEEK_END); close(new_fd); did not overcome the caching issue.
EDIT 2: I also created an all-in-one demo program. If it prints Ok, success before exiting, everything went fine. But usually it prints Timeout (10000000), which demonstrates the race condition: the file size check for the last kevent triggered is smaller than the actual file size at that very moment. Weirdly, when using ftruncate() to grow the file instead of write(), it seems to work (you can compile the test program with -DUSE_FTRUNCATE to test that).
Nothing states that fsync() really solves my problem
I don't want to fsync() because of performance
Your problem isn't "fstat caching results"; it's the I/O system buffering writes. fstat doesn't get updated until the kernel flushes the I/O buffers to the underlying file system.
This is why fsync fixes your problem, and why any solution to your problem more or less has to do the equivalent of an fsync. (This is what the open/close solution does as a side effect.)
Can't help you with point 2 (the performance concern), because I don't see any way to avoid doing the fsync.
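In other words, evaluate_fd_size more or less has to flush before it measures. A sketch of that idea (an illustration of the suggestion, not tested code):

#include <errno.h>
#include <sys/stat.h>
#include <unistd.h>

/* Flush buffered I/O for the file first, then read its size with fstat().
 * Same return-value contract as described in the question. */
static int evaluate_fd_size(int fd, off_t wanted_size)
{
    struct stat st;

    if (fsync(fd) < 0)
        return -1;              /* errno set by fsync */
    if (fstat(fd, &st) < 0)
        return -1;              /* errno set by fstat */

    if (st.st_size > wanted_size) {
        errno = EFBIG;
        return -1;
    }
    return (st.st_size == wanted_size) ? 1 : 0;
}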
Loading a bundle from memory is possible with the NSCreateObjectFileImageFromMemory function. Does anyone have successful experience in this area? Does anyone have a working sample for this function?
My code is as follows:
text srcPath = "/Applications/TextEdit.app/Contents/MacOS/TextEdit";
data_t data;
data.loadFromFile(srcPath);

void *addr;
kern_return_t err;
NSObjectFileImage img = nil;
NSObjectFileImageReturnCode dyld_err;

err = vm_allocate(mach_task_self(), (vm_address_t *)&addr,
                  data.length(), true);
if (err == 0)
{
    //err = vm_write(mach_task_self(), (vm_address_t)addr,
    //               (vm_address_t)(char*)data, data.length());
    memcpy(addr, (char*)data, data.length());
    if (err == 0)
        dyld_err =
            NSCreateObjectFileImageFromMemory(addr, data.length(), &img);
    // error is NSObjectFileImageFailure
}
The img variable is null (the error is NSObjectFileImageFailure). Why?
Thank you.
From the manpage, it looks like only MH_BUNDLE files can be loaded with NSCreateObjectFileImageFromMemory() and friends.
MH_BUNDLE files are explained here.
The MH_BUNDLE file type is the type typically used by code that you
load at runtime (typically called bundles or plug-ins). By convention,
the file name extension for this format is .bundle.
Note that that manpage is for 10.4 and there does not appear to be a newer version available.
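If you want to verify that this is what's going wrong, you could inspect the Mach-O header of the buffer before handing it to dyld. A rough sketch (64-bit, native-endian only; a main executable such as TextEdit will typically report MH_EXECUTE here rather than MH_BUNDLE):

#include <mach-o/loader.h>
#include <stdbool.h>
#include <stdio.h>

/* Sketch: return true if the buffer starts with a 64-bit Mach-O header
 * whose filetype is MH_BUNDLE, which is what
 * NSCreateObjectFileImageFromMemory() expects. */
static bool is_mh_bundle(const void *buf, size_t len)
{
    if (len < sizeof(struct mach_header_64))
        return false;

    const struct mach_header_64 *mh = (const struct mach_header_64 *)buf;
    if (mh->magic != MH_MAGIC_64)
        return false;           /* not a (native-endian) 64-bit Mach-O file */

    printf("filetype = %u (MH_BUNDLE is %u)\n",
           (unsigned)mh->filetype, (unsigned)MH_BUNDLE);
    return mh->filetype == MH_BUNDLE;
}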
We want to write to "foo.txt" in a given directory. If "foo.txt" already exists, we want to write to "foo-1.txt", and so on.
There are a few code snippets around that try to answer this question, but none are quite satisfactory. E.g. this solution at CocoaDev uses NSFileManager to test whether a path exists in order to build a safe path. However, this leads to an obvious race condition between obtaining the path and writing to it. It would be safer to attempt atomic writes and loop over the numeric suffix on failure.
Go at it!
Use the open system call with the O_EXCL and O_CREAT options. If the file doesn't already exist, open will create it, open it, and return the file descriptor to you; if it does exist, open will fail and set errno to EEXIST.
From there, it should be obvious how to construct the loop that keeps incrementing the filename until it either gets a file descriptor or builds a filename that is too long. On the latter point, make sure you check errno when open fails: EEXIST and ENAMETOOLONG are just two of the errors you could encounter.
int fd;
uint32_t counter = 0; // initialized so the checks below are defined even if the loop never runs
char filename[1024]; // obviously unsafe

sprintf(filename, "foo.txt");
if( (fd = open(filename, O_CREAT | O_EXCL | O_EXLOCK, 0644)) == -1 && errno == EEXIST )
{
    for( counter = 1; counter < UINT32_MAX; counter++ ) {
        sprintf(filename, "foo-%u.txt", counter);
        if( (fd = open(filename, O_CREAT | O_EXCL | O_EXLOCK, 0644)) == -1 && errno == EEXIST )
            continue;
        else
            break;
    }
}
if( fd == -1 && counter == UINT32_MAX ) {
    fprintf(stderr, "too many foo-files\n");
} else if( fd == -1 ) {
    fprintf(stderr, "could not open file: %s\n", strerror(errno));
}
// otherwise fd is an open file with an atomically unique name and an
// exclusive lock.
How about:
1. Write the file to a temporary directory where you know there's no risk of collision
2. Use NSFileManager to move the file to the preferred destination
3. If step 2 fails due to a file already existing, add/increment a numeric suffix and repeat step 2
You'd basically be re-creating Cocoa's atomic file write handling, but adding the feature of ensuring a unique filename. A big advantage of this approach is that if the power goes out or your app crashes mid-write, the half-finished file will be tucked away in a tmp folder and deleted by the system, not left for the user to try to work with.
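For what it's worth, the same idea can be sketched with plain POSIX calls instead of NSFileManager (mkstemp() for the collision-free temporary file, link() for the move, since link() fails with EEXIST instead of overwriting); treat this as an illustration of the approach rather than the Cocoa implementation:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Sketch: write to a unique temporary file, then try to link it into place
 * as foo.txt, foo-1.txt, ... link() fails with EEXIST instead of replacing
 * an existing file, so the final name is claimed atomically. */
int write_unique(const char *dir, const char *contents)
{
    char tmppath[1024];
    snprintf(tmppath, sizeof(tmppath), "%s/.foo-XXXXXX", dir);

    int fd = mkstemp(tmppath);            /* collision-free temporary file */
    if (fd == -1)
        return -1;
    (void)write(fd, contents, strlen(contents)); /* the "write the file" step */
    close(fd);

    char dest[1024];
    for (unsigned i = 0; i < 1000; i++) {
        if (i == 0)
            snprintf(dest, sizeof(dest), "%s/foo.txt", dir);
        else
            snprintf(dest, sizeof(dest), "%s/foo-%u.txt", dir, i);

        if (link(tmppath, dest) == 0) {   /* atomic "move into place" */
            unlink(tmppath);              /* drop the temporary name */
            return 0;
        }
        if (errno != EEXIST) {            /* a real error: give up */
            unlink(tmppath);
            return -1;
        }
        /* EEXIST: name already taken, increment the suffix and try again */
    }

    unlink(tmppath);
    errno = EEXIST;
    return -1;
}

Writing the temporary file into the destination directory keeps link() on the same filesystem; the NSFileManager move in step 2 above would handle the cross-volume case for you.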